| node_id (int64, 0–76.9k) | label (int64, 0–39) | text (string, 13–124k chars) | neighbors (list, 0–3.32k items) | mask (4 classes) |
|---|---|---|---|---|
42,778 | 24 | Title: Vector Quantized Time Series Generation with a Bidirectional Prior Model
Abstract: Time series generation (TSG) studies have mainly focused on the use of Generative Adversarial Networks (GANs) combined with recurrent neural network (RNN) variants. However, the fundamental limitations and challenges of training GANs still remain. In addition, the RNN-family typically has difficulties with temporal consistency between distant timesteps. Motivated by the successes in the image generation (IMG) domain, we propose TimeVQVAE, the first work, to our knowledge, that uses vector quantization (VQ) techniques to address the TSG problem. Moreover, the priors of the discrete latent spaces are learned with bidirectional transformer models that can better capture global temporal consistency. We also propose VQ modeling in a time-frequency domain, separated into low-frequency (LF) and high-frequency (HF). This allows us to retain important characteristics of the time series and, in turn, generate new synthetic signals that are of better quality, with sharper changes in modularity, than those of competing TSG methods. Our experimental evaluation is conducted on all datasets from the UCR archive, using well-established metrics in the IMG literature, such as Fr\'echet inception distance and inception scores. Our implementation on GitHub: \url{https://github.com/ML4ITS/TimeVQVAE}. | [
8668
] | Validation |
42,779 | 24 | Title: TDSTF: Transformer-based Diffusion probabilistic model for Sparse Time series Forecasting
Abstract: Background and objective: In the intensive care unit (ICU), vital sign monitoring is critical, and an accurate predictive system is required. This study will create a novel model to forecast Heart Rate (HR), Systolic Blood Pressure (SBP), and Diastolic Blood Pressure (DBP) in ICU. These vital signs are crucial for prompt interventions for patients. We extracted $24,886$ ICU stays from the MIMIC-III database, which contains data from over $46$ thousand patients, to train and test the model. Methods: The model proposed in this study, the Transformer-based Diffusion probabilistic Model for Sparse Time Series Forecasting (TDSTF), uses a deep learning technique called the Transformer. The TDSTF model showed state-of-the-art performance in predicting vital signs in the ICU, outperforming other models' ability to predict distributions of vital signs and being more computationally efficient. The code is available at https://github.com/PingChang818/TDSTF. Results: The results of the study showed that TDSTF achieved a Normalized Average Continuous Ranked Probability Score (NACRPS) of $0.4438$ and a Mean Squared Error (MSE) of $0.4168$, an improvement of $18.9\%$ and $34.3\%$ over the best baseline model, respectively. Conclusion: In conclusion, TDSTF is an effective and efficient solution for forecasting vital signs in the ICU, and it shows a significant improvement compared to other models in the field. | [
15529,
12098,
17941
] | Train |
42,780 | 30 | Title: A Comparative Study of Open-Source Large Language Models, GPT-4 and Claude 2: Multiple-Choice Test Taking in Nephrology
Abstract: In recent years, there have been significant breakthroughs in the field of natural language processing, particularly with the development of large language models (LLMs). These LLMs have showcased remarkable capabilities on various benchmarks. In the healthcare field, the exact role LLMs and other future AI models will play remains unclear. There is a potential for these models in the future to be used as part of adaptive physician training, medical co-pilot applications, and digital patient interaction scenarios. The ability of AI models to participate in medical training and patient care will depend in part on their mastery of the knowledge content of specific medical fields. This study investigated the medical knowledge capability of LLMs, specifically in the context of internal medicine subspecialty multiple-choice test-taking ability. We compared the performance of several open-source LLMs (Koala 7B, Falcon 7B, Stable-Vicuna 13B, and Orca Mini 13B) to GPT-4 and Claude 2 on multiple-choice questions in the field of Nephrology. Nephrology was chosen as an example of a particularly conceptually complex subspecialty field within internal medicine. The study was conducted to evaluate the ability of LLM models to provide correct answers to nephSAP (Nephrology Self-Assessment Program) multiple-choice questions. The overall success of open-source LLMs in answering the 858 nephSAP multiple-choice questions correctly was 17.1% - 25.5%. In contrast, Claude 2 answered 54.4% of the questions correctly, whereas GPT-4 achieved a score of 73.3%. We show that current widely used open-source LLMs do poorly in their ability for zero-shot reasoning when compared to GPT-4 and Claude 2. The findings of this study potentially have significant implications for the future of subspecialty medical training and patient care. | [
40192,
15301,
11273,
109,
6797,
4635
] | Train |
42,781 | 16 | Title: Few-Shot Geometry-Aware Keypoint Localization
Abstract: Supervised keypoint localization methods rely on large manually labeled image datasets, where objects can deform, articulate, or occlude. However, creating such large keypoint labels is time-consuming and costly, and is often error-prone due to inconsistent labeling. Thus, we desire an approach that can learn keypoint localization with fewer yet consistently annotated images. To this end, we present a novel formulation that learns to localize semantically consistent keypoint definitions, even for occluded regions, for varying object categories. We use a few user-labeled 2D images as input examples, which are extended via self-supervision using a larger unlabeled dataset. Unlike unsupervised methods, the few-shot images act as semantic shape constraints for object localization. Furthermore, we introduce 3D geometry-aware constraints to uplift keypoints, achieving more accurate 2D localization. Our general-purpose formulation paves the way for semantically conditioned generative modeling and attains competitive or state-of-the-art accuracy on several datasets, including human faces, eyes, animals, cars, and never-before-seen mouth interior (teeth) localization tasks, not attempted by the previous few-shot methods. Project page: https://xingzhehe.github.io/FewShot3DKP/ | [] | Train |
42,782 | 24 | Title: Deep Multi-View Subspace Clustering with Anchor Graph
Abstract: Deep multi-view subspace clustering (DMVSC) has recently attracted increasing attention due to its promising performance. However, existing DMVSC methods still have two issues: (1) they mainly focus on using autoencoders to nonlinearly embed the data, while the embedding may be suboptimal for clustering because the clustering objective is rarely considered in autoencoders, and (2) existing methods typically have a quadratic or even cubic complexity, which makes it challenging to deal with large-scale data. To address these issues, in this paper we propose a novel deep multi-view subspace clustering method with anchor graph (DMCAG). To be specific, DMCAG firstly learns the embedded features for each view independently, which are used to obtain the subspace representations. To significantly reduce the complexity, we construct an anchor graph with small size for each view. Then, spectral clustering is performed on an integrated anchor graph to obtain pseudo-labels. To overcome the negative impact caused by suboptimal embedded features, we use pseudo-labels to refine the embedding process to make it more suitable for the clustering task. Pseudo-labels and embedded features are updated alternately. Furthermore, we design a strategy to keep the consistency of the labels based on contrastive learning to enhance the clustering performance. Empirical studies on real-world datasets show that our method achieves superior clustering performance over other state-of-the-art methods. | [] | Validation |
42,783 | 33 | Title: Getting More out of Large Language Models for Proofs
Abstract: Large language models have the potential to simplify formal theorem proving and make it more accessible. But how to get the most out of these models is still an open question. To answer this question, we take a step back and explore the failure cases of these models using common prompting-based techniques. Our talk will discuss these failure cases and what they can teach us about how to get more out of these models. | [
43471
] | Train |
42,784 | 24 | Title: Uncertainty Injection: A Deep Learning Method for Robust Optimization
Abstract: This paper proposes a paradigm of uncertainty injection for training deep learning models to solve robust optimization problems. The majority of existing studies on deep learning focus on the model learning capability, while assuming the quality and accuracy of the input data can be guaranteed. However, in realistic applications of deep learning for solving optimization problems, the accuracy of the inputs, which are the problem parameters in this case, plays a large role. This is because, in many situations, it is often costly or sometimes impossible to obtain the problem parameters accurately, and correspondingly, it is highly desirable to develop learning algorithms that can account for the uncertainties in the input and produce solutions that are robust against these uncertainties. This paper presents a novel uncertainty injection scheme for training machine learning models that are capable of implicitly accounting for the uncertainties and producing statistically robust solutions. We further identify wireless communications as an application field where uncertainties are prevalent in problem parameters such as the channel coefficients. We show the effectiveness of the proposed training scheme in two applications: the robust power loading for multiuser multiple-input-multiple-output (MIMO) downlink transmissions; and the robust power control for device-to-device (D2D) networks. | [] | Train |
42,785 | 4 | Title: CRS-FL: Conditional Random Sampling for Communication-Efficient and Privacy-Preserving Federated Learning
Abstract: Federated Learning (FL), a privacy-oriented distributed ML paradigm, is gaining great interest in the Internet of Things because of its capability to protect participants' data privacy. Studies have been conducted to address challenges existing in standard FL, including communication efficiency and privacy-preserving. But they cannot achieve the goal of making a tradeoff between communication efficiency and model accuracy while guaranteeing privacy. This paper proposes a Conditional Random Sampling (CRS) method and implements it into the standard FL settings (CRS-FL) to tackle the above-mentioned challenges. CRS explores a stochastic coefficient based on Poisson sampling to achieve a higher probability of obtaining zero-gradient unbiasedly, and then decreases the communication overhead effectively without model accuracy degradation. Moreover, we dig out the relaxation Local Differential Privacy (LDP) guarantee conditions of CRS theoretically. Extensive experiment results indicate that (1) in communication efficiency, CRS-FL performs better than the existing methods in metric accuracy per transmission byte without model accuracy reduction in more than 7% sampling ratio (# sampling size / # model size); (2) in privacy-preserving, CRS-FL achieves no accuracy reduction compared with LDP baselines while holding the efficiency, even exceeding them in model accuracy under more sampling ratio conditions. | [] | Validation |
42,786 | 16 | Title: Tuning computer vision models with task rewards
Abstract: Misalignment between model predictions and intended usage can be detrimental for the deployment of computer vision models. The issue is exacerbated when the task involves complex structured outputs, as it becomes harder to design procedures which address this misalignment. In natural language processing, this is often addressed using reinforcement learning techniques that align models with a task reward. We adopt this approach and show its surprising effectiveness across multiple computer vision tasks, such as object detection, panoptic segmentation, colorization and image captioning. We believe this approach has the potential to be widely useful for better aligning models with a diverse range of computer vision tasks. | [
15809,
32706,
5387,
36214,
44537,
11199
] | Train |
42,787 | 30 | Title: A Survey on Event-Based News Narrative Extraction
Abstract: Narratives are fundamental to our understanding of the world, providing us with a natural structure for knowledge representation over time. Computational narrative extraction is a subfield of artificial intelligence that makes heavy use of information retrieval and natural language processing techniques. Despite the importance of computational narrative extraction, relatively little scholarly work exists on synthesizing previous research and strategizing future research in the area. In particular, this article focuses on extracting news narratives from an event-centric perspective. Extracting narratives from news data has multiple applications in understanding the evolving information landscape. This survey presents an extensive study of research in the area of event-based news narrative extraction. In particular, we screened more than 900 articles, which yielded 54 relevant articles. These articles are synthesized and organized by representation model, extraction criteria, and evaluation approaches. Based on the reviewed studies, we identify recent trends, open challenges, and potential research lines. | [] | Train |
42,788 | 24 | Title: SLPerf: a Unified Framework for Benchmarking Split Learning
Abstract: Data privacy concerns have made centralized training of data, which is scattered across silos, infeasible, leading to the need for collaborative learning frameworks. To address that, two prominent frameworks emerged, i.e., federated learning (FL) and split learning (SL). While FL has established various benchmark frameworks and research libraries, SL currently lacks a unified library despite its diversity in terms of label sharing, model aggregation, and cut layer choice. This lack of standardization makes comparing SL paradigms difficult. To address this, we propose SLPerf, a unified research framework and open research library for SL, and conduct extensive experiments on four widely-used datasets under both IID and Non-IID data settings. Our contributions include a comprehensive survey of recently proposed SL paradigms, a detailed benchmark comparison of different SL paradigms in different situations, and rich engineering take-away messages and research insights for improving SL paradigms. SLPerf can facilitate SL algorithm development and fair performance comparisons. The code is available at https://github.com/Rainysponge/Split-learning-Attacks . | [] | Validation |
42,789 | 22 | Title: On the Applicability of Language Models to Block-Based Programs
Abstract: Block-based programming languages like Scratch are increasingly popular for programming education and end-user programming. Recent program analyses build on the insight that source code can be modelled using techniques from natural language processing. Many of the regularities of source code that support this approach are due to the syntactic overhead imposed by textual programming languages. This syntactic overhead, however, is precisely what block-based languages remove in order to simplify programming. Consequently, it is unclear how well this modelling approach performs on block-based programming languages. In this paper, we investigate the applicability of language models for the popular block-based programming language Scratch. We model Scratch programs using n-gram models, the most essential type of language model, and transformers, a popular deep learning model. Evaluation on the example tasks of code completion and bug finding confirm that blocks inhibit predictability, but the use of language models is nevertheless feasible. Our findings serve as foundation for improving tooling and analyses for block-based languages. | [] | Train |
42,790 | 36 | Title: Towards Optimal Prior-Free Permissionless Rebate Mechanisms, with applications to Automated Market Makers & Combinatorial Orderflow Auctions
Abstract: Maximal Extractable Value (MEV) has become a critical issue for blockchain ecosystems, as it enables validators or block proposers to extract value by ordering, including or censoring users' transactions. This paper aims to present a formal approach for determining the appropriate compensation for users whose transactions are executed in bundles, as opposed to individually. We explore the impact of MEV on users, discuss the Shapley value as a solution for fair compensation, and delve into the mechanisms of MEV rebates and auctions as a means to undermine the power of the block producer. | [
9562,
23350,
37999
] | Train |
42,791 | 31 | Title: Report from Dagstuhl Seminar 23031: Frontiers of Information Access Experimentation for Research and Education
Abstract: This report documents the program and the outcomes of Dagstuhl Seminar 23031 ``Frontiers of Information Access Experimentation for Research and Education'', which brought together 37 participants from 12 countries. The seminar addressed technology-enhanced information access (information retrieval, recommender systems, natural language processing) and specifically focused on developing more responsible experimental practices leading to more valid results, both for research as well as for scientific education. The seminar brought together experts from various sub-fields of information access, namely IR, RS, NLP, information science, and human-computer interaction to create a joint understanding of the problems and challenges presented by next generation information access systems, from both the research and the experimentation points of view, to discuss existing solutions and impediments, and to propose next steps to be pursued in the area in order to improve not only our research methods and findings but also the education of the new generation of researchers and developers. The seminar featured a series of long and short talks delivered by participants, who helped in setting a common ground and in letting topics of interest emerge to be explored as the main output of the seminar. This led to the definition of five groups which investigated challenges, opportunities, and next steps in the following areas: reality check, i.e. conducting real-world studies, human-machine-collaborative relevance judgment frameworks, overcoming methodological challenges in information retrieval and recommender systems through awareness and education, results-blind reviewing, and guidance for authors. | [
41871
] | Validation |
42,792 | 36 | Title: A compositional game to fairly divide homogeneous cake
Abstract: The central question in the game theory of cake-cutting is how to distribute a finite resource among players in a fair way. Most research has focused on how to do this for a heterogeneous cake in a situation where the players do not have access to each other's valuation function, but I argue that even sharing homogeneous cake can have interesting mechanism design. Here, I introduce a new game, based on the compositional structure of iterated cake-cutting, that in the case of a homogeneous cake has a Nash equilibrium where each of $n$ players gets $1/n$ of the cake. Furthermore, the equilibrium distribution is the result of just $n-1$ cuts, so each player gets a contiguous piece of cake. Naive composition of the `I cut you choose' rule leads to an exponentially unfair cake distribution so suffers from a high price of anarchy. This cost is completely eliminated by the BigPlayer rule. After introducing the game and proving the fairness of the equilibrium, the game is implemented in Haskell and the Open Game engine to make the compositional structure explicit. | [] | Train |
42,793 | 16 | Title: CENSUS-HWR: a large training dataset for offline handwriting recognition
Abstract: Progress in Automated Handwriting Recognition has been hampered by the lack of large training datasets. Nearly all research uses a set of small datasets that often cause models to overfit. We present CENSUS-HWR, a new dataset consisting of full English handwritten words in 1,812,014 gray scale images. A total of 1,865,134 handwritten texts from a vocabulary of 10,711 words in the English language are present in this collection. This dataset is intended to serve handwriting models as a benchmark for deep learning algorithms. This huge English handwriting recognition dataset has been extracted from the US 1930 and 1940 censuses taken by approximately 70,000 enumerators each year. The dataset and the trained model with their weights are freely available to download at https://censustree.org/data.html. | [] | Test |
42,794 | 16 | Title: Meta-Transformer: A Unified Framework for Multimodal Learning
Abstract: Multimodal learning aims to build models that can process and relate information from multiple modalities. Despite years of development in this field, it still remains challenging to design a unified network for processing various modalities ($\textit{e.g.}$ natural language, 2D images, 3D point clouds, audio, video, time series, tabular data) due to the inherent gaps among them. In this work, we propose a framework, named Meta-Transformer, that leverages a $\textbf{frozen}$ encoder to perform multimodal perception without any paired multimodal training data. In Meta-Transformer, the raw input data from various modalities are mapped into a shared token space, allowing a subsequent encoder with frozen parameters to extract high-level semantic features of the input data. Composed of three main components: a unified data tokenizer, a modality-shared encoder, and task-specific heads for downstream tasks, Meta-Transformer is the first framework to perform unified learning across 12 modalities with unpaired data. Experiments on different benchmarks reveal that Meta-Transformer can handle a wide range of tasks including fundamental perception (text, image, point cloud, audio, video), practical application (X-Ray, infrared, hyperspectral, and IMU), and data mining (graph, tabular, and time-series). Meta-Transformer indicates a promising future for developing unified multimodal intelligence with transformers. Code will be available at https://github.com/invictus717/MetaTransformer | [
18404,
38565,
29224,
31375,
17109,
15541,
23545,
37438,
41183
] | Train |
42,795 | 24 | Title: Can We Transfer Noise Patterns? A Multi-environment Spectrum Analysis Model Using Generated Cases
Abstract: Spectrum analysis systems in online water quality testing are designed to detect types and concentrations of pollutants and enable regulatory agencies to respond promptly to pollution incidents. However, spectral data-based testing devices suffer from complex noise patterns when deployed in non-laboratory environments. To make the analysis model applicable to more environments, we propose a noise patterns transferring model, which takes the spectrum of standard water samples in different environments as cases and learns the differences in their noise patterns, thus enabling noise patterns to transfer to unknown samples. Unfortunately, the inevitable sample-level baseline noise makes the model unable to obtain the paired data that only differ in dataset-level environmental noise. To address the problem, we generate a sample-to-sample case-base to exclude the interference of sample-level noise on dataset-level noise learning, enhancing the system's learning performance. Experiments on spectral data with different background noises demonstrate the good noise-transferring ability of the proposed method against baseline systems ranging from wavelet denoising, deep neural networks, and generative models. From this research, we posit that our method can enhance the performance of DL models by generating high-quality cases. The source code is made publicly available online at https://github.com/Magnomic/CNST. | [] | Train |
42,796 | 3 | Title: Toward A Dynamic Comfort Model for Human-Building Interaction in Grid-Interactive Efficient Buildings: Supported by Field Data
Abstract: Controlling building electric loads could alleviate the increasing grid strain caused by the adoption of renewables and electrification. However, current approaches that automatically set back thermostats on the hottest day compromise their efficacy by neglecting human-building interaction (HBI). This study aims to define challenges and opportunities for developing engineering models of HBI to be used in the design of controls for grid-interactive efficient buildings (GEBs). Building system and measured and just-in-time surveyed psychophysiological data were collected from 41 participants in 20 homes from April-September. ASHRAE Standard 55 thermal comfort models for building design were evaluated with these data. Increased error bias was observed with increasing spatiotemporal temperature variations. This is unsurprising, considering these models neglect such variance, but it questions their suitability for GEBs controlling thermostat setpoints, given the observed 4{\deg}F intra-home spatial temperature variation. The results highlight opportunities for reducing these biases in GEBs through a paradigm shift to modeling discomfort instead of comfort, increasing use of low-cost sensors, and models that account for the observed dynamic occupant behavior: of the thermostat setpoint overrides made within 140 minutes of a previous setpoint change, 95% of small changes ( 2{\deg}F) were made within 120 minutes, while 95% of larger changes ( 10{\deg}F) were made within only 70 minutes. | [] | Validation |
42,797 | 24 | Title: Towards Generalizable Neural Solvers for Vehicle Routing Problems via Ensemble with Transferrable Local Policy
Abstract: Machine learning has been adapted to help solve NP-hard combinatorial optimization problems. One prevalent way is learning to construct solutions by deep neural networks, which has been receiving more and more attention due to the high efficiency and less requirement for expert knowledge. However, many neural construction methods for Vehicle Routing Problems (VRPs) focus on synthetic problem instances with limited scales and specified node distributions, leading to poor performance on real-world problems which usually involve large scales together with complex and unknown node distributions. To make neural VRP solvers more practical in real-world scenarios, we design an auxiliary policy that learns from the local transferable topological features, named local policy, and integrate it with a typical constructive policy (which learns from the global information of VRP instances) to form an ensemble policy. With joint training, the aggregated policies perform cooperatively and complementarily to boost generalization. The experimental results on two well-known benchmarks, TSPLIB and CVRPLIB, of travelling salesman problem and capacitated VRP show that the ensemble policy consistently achieves better generalization than state-of-the-art construction methods and even works well on real-world problems with several thousand nodes. | [
41879,
250,
10842,
41407
] | Validation |
42,798 | 16 | Title: Diffusion Model based Semi-supervised Learning on Brain Hemorrhage Images for Efficient Midline Shift Quantification
Abstract: Brain midline shift (MLS) is one of the most critical factors to be considered for clinical diagnosis and treatment decision-making for intracranial hemorrhage. Existing computational methods on MLS quantification not only require intensive labeling in millimeter-level measurement but also suffer from poor performance due to their dependence on specific landmarks or simplified anatomical assumptions. In this paper, we propose a novel semi-supervised framework to accurately measure the scale of MLS from head CT scans. We formulate the MLS measurement task as a deformation estimation problem and solve it using a few MLS slices with sparse labels. Meanwhile, with the help of diffusion models, we are able to use a great number of unlabeled MLS data and 2793 non-MLS cases for representation learning and regularization. The extracted representation reflects how the image is different from a non-MLS image and regularization serves an important role in the sparse-to-dense refinement of the deformation field. Our experiment on a real clinical brain hemorrhage dataset has achieved state-of-the-art performance and can generate interpretable deformation fields. | [] | Test |
42,799 | 6 | Title: The Challenges of Studying Misinformation on Video-Sharing Platforms During Crises and Mass-Convergence Events
Abstract: Mis- and disinformation can spread rapidly on video-sharing platforms (VSPs). Despite the growing use of VSPs, there has not been a proportional increase in our ability to understand this medium and the messages conveyed through it. In this work, we draw on our prior experiences to outline three core challenges faced in studying VSPs in high-stakes and fast-paced settings: (1) navigating the unique affordances of VSPs, (2) understanding VSP content and determining its authenticity, and (3) novel user behaviors on VSPs for spreading misinformation. By highlighting these challenges, we hope that researchers can reflect on how to adapt existing research methods and tools to these new contexts, or develop entirely new ones. | [
4599
] | Test |
42,800 | 24 | Title: Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning
Abstract: Large pre-trained models (LPMs), such as LLaMA and ViT-G, have shown exceptional performance across various tasks. Although parameter-efficient fine-tuning (PEFT) has emerged to cheaply fine-tune these large models on downstream tasks, their deployment is still hindered by the vast model scale and computational costs. Neural network pruning offers a solution for model compression by removing redundant parameters, but most existing methods rely on computing parameter gradients. However, obtaining the gradients is computationally prohibitive for LPMs, which necessitates the exploration of alternative approaches. To this end, we propose a unified framework for efficient fine-tuning and deployment of LPMs, termed LoRAPrune. We first design a PEFT-aware pruning criterion, which utilizes the values and gradients of Low-Rank Adaption (LoRA), rather than the gradients of pre-trained parameters for importance estimation. We then propose an iterative pruning procedure to remove redundant parameters while maximizing the advantages of PEFT. Thus, our LoRAPrune delivers an accurate, compact model for efficient inference in a highly cost-effective manner. Experimental results on various tasks demonstrate that our method achieves state-of-the-art results. For instance, in the VTAB-1k benchmark, LoRAPrune utilizes only 0.76% of the trainable parameters and outperforms magnitude and movement pruning methods by a significant margin, achieving a mean Top-1 accuracy that is 5.7% and 4.3% higher, respectively. Moreover, our approach achieves comparable performance to PEFT methods, highlighting its efficacy in delivering high-quality results while benefiting from the advantages of pruning. | [
6979,
13700,
38796,
12851,
24156
] | Train |
42,801 | 28 | Title: On Strong Secrecy for Multiple Access Channel with States and causal CSI
Abstract: Strong secrecy communication over a discrete memoryless state-dependent multiple access channel (SD-MAC) with an external eavesdropper is investigated. The channel is governed by discrete memoryless and i.i.d. channel states and the channel state information (CSI) is revealed to the encoders in a causal manner. An inner bound of the capacity is provided. To establish the inner bound, we investigate coding schemes incorporating wiretap coding and secret key agreement between the sender and the legitimate receiver. Two kinds of block Markov coding schemes are studied. The first one uses backward decoding and Wyner-Ziv coding and the secret key is constructed from a lossy reproduction of the CSI. The other one is an extended version of the existing coding scheme for point-to-point wiretap channels with causal CSI. We further investigate some capacity-achieving cases for state-dependent multiple access wiretap channels (SD-MAWCs) with degraded message sets. It turns out that the two coding schemes are both optimal in these cases. | [] | Train |
42,802 | 2 | Title: On Boolean reliability algebra
Abstract: In this paper we consider systems which consist of binary components with known reliabilities. We discuss their algebraic properties and define the corresponding algebraic structure, which we call the reliability algebra. We prove that the reliability algebra is a Boolean algebra. The reliability algebra appears to be an appropriate context for defining a fuzziness measure, i.e., a membership function taking reliability values. | [] | Validation |
42,803 | 16 | Title: Vehicle Safety Management System
Abstract: Overtaking is a critical maneuver in driving that requires accurate information about the location and distance of other vehicles on the road. This study proposes a real-time overtaking assistance system that uses a combination of the You Only Look Once (YOLO) object detection algorithm and stereo vision techniques to accurately identify and locate vehicles in front of the driver and estimate their distance. The system then signals the vehicles behind the driver using colored lights to inform them of the safe overtaking distance. The proposed system has been implemented using stereo vision for distance analysis and YOLO for object identification. The results demonstrate its effectiveness in accurately providing the vehicle type and the distance between the camera module and the vehicle, with an approximate error of 4.107%. Our system has the potential to reduce the risk of accidents and improve the safety of overtaking maneuvers, especially on busy highways and roads. | [] | Train |
42,804 | 16 | Title: 3D Semantic Subspace Traverser: Empowering 3D Generative Model with Shape Editing Capability
Abstract: Shape generation is the practice of producing 3D shapes as various representations for 3D content creation. Previous studies on 3D shape generation have focused on shape quality and structure, with little or no consideration of semantic information. Consequently, such generative models often fail to preserve the semantic consistency of shape structure or enable manipulation of the semantic attributes of shapes during generation. In this paper, we propose a novel semantic generative model named 3D Semantic Subspace Traverser that utilizes semantic attributes for category-specific 3D shape generation and editing. Our method utilizes implicit functions as the 3D shape representation and combines a novel latent-space GAN with a linear subspace model to discover semantic dimensions in the local latent space of 3D shapes. Each dimension of the subspace corresponds to a particular semantic attribute, and we can edit the attributes of generated shapes by traversing the coefficients of those dimensions. Experimental results demonstrate that our method can produce plausible shapes with complex structures and enable the editing of semantic attributes. The code and trained models are available at https://github.com/TrepangCat/3D_Semantic_Subspace_Traverser | [
25151
] | Validation |
42,805 | 21 | Title: The Impact of Process Complexity on Process Performance: A Study using Event Log Data
Abstract: Complexity is an important characteristic of any business process. The key assumption of much research in Business Process Management is that process complexity has a negative impact on process performance. So far, behavioral studies have measured complexity based on the perception of process stakeholders. The aim of this study is to investigate if such a connection can be supported based on the analysis of event log data. To do so, we employ a set of 38 metrics that capture different dimensions of process complexity. We use these metrics to build various regression models that explain process performance in terms of throughput time. We find that process complexity as captured in event logs explains the throughput time of process executions to a considerable extent, with the respective R-squared reaching up to 0.96. Our study offers implications for empirical research on process performance and can serve as a toolbox for practitioners. | [] | Train |
42,806 | 10 | Title: Optimal Control of Logically Constrained Partially Observable and Multi-Agent Markov Decision Processes
Abstract: Autonomous systems often have logical constraints arising, for example, from safety, operational, or regulatory requirements. Such constraints can be expressed using temporal logic specifications. The system state is often partially observable. Moreover, it could encompass a team of multiple agents with a common objective but disparate information structures and constraints. In this paper, we first introduce an optimal control theory for partially observable Markov decision processes (POMDPs) with finite linear temporal logic constraints. We provide a structured methodology for synthesizing policies that maximize a cumulative reward while ensuring that the probability of satisfying a temporal logic constraint is sufficiently high. Our approach comes with guarantees on approximate reward optimality and constraint satisfaction. We then build on this approach to design an optimal control framework for logically constrained multi-agent settings with information asymmetry. We illustrate the effectiveness of our approach by implementing it on several case studies. | [] | Train |
42,807 | 16 | Title: Shape-Graph Matching Network (SGM-net): Registration for Statistical Shape Analysis
Abstract: This paper focuses on the statistical analysis of shapes of data objects called shape graphs, a set of nodes connected by articulated curves with arbitrary shapes. A critical need here is a constrained registration of points (nodes to nodes, edges to edges) across objects. This, in turn, requires optimization over the permutation group, made challenging by differences in nodes (in terms of numbers, locations) and edges (in terms of shapes, placements, and sizes) across objects. This paper tackles this registration problem using a novel neural-network architecture and involves an unsupervised loss function developed using the elastic shape metric for curves. This architecture results in (1) state-of-the-art matching performance and (2) an order of magnitude reduction in the computational cost relative to baseline approaches. We demonstrate the effectiveness of the proposed approach using both simulated data and real-world 2D and 3D shape graphs. Code and data will be made publicly available after review to foster research. | [] | Validation |
42,808 | 16 | Title: Rapid-Motion-Track: Markerless Tracking of Fast Human Motion with Deeper Learning
Abstract: Objective: The coordination of human movement directly reflects function of the central nervous system. Small deficits in movement are often the first sign of an underlying neurological problem. The objective of this research is to develop a new end-to-end, deep learning-based system, Rapid-Motion-Track (RMT), that can track the fastest human movement accurately when webcams or laptop cameras are used. Materials and Methods: We applied RMT to finger tapping, a well-validated test of motor control that is one of the most challenging human motions to track with computer vision due to the small keypoints of digits and the high velocities that are generated. We recorded 160 finger tapping assessments simultaneously with a standard 2D laptop camera (30 frames/sec) and a high-speed wearable sensor-based 3D motion tracking system (250 frames/sec). RMT and a range of DLC models were applied to the video data with tapping frequencies up to 8Hz to extract movement features. Results: The movement features (e.g. speed, rhythm, variance) identified with the new RMT system exhibited very high concurrent validity with the gold-standard measurements (97.3\% of RMT measures were within +/-0.5Hz of the Optotrak measures), and outperformed DLC and other advanced computer vision tools (around 88.2\% of DLC measures were within +/-0.5Hz of the Optotrak measures). RMT also accurately tracked a range of other rapid human movements such as foot tapping, head turning and sit-to-stand movements. Conclusion: With the ubiquity of video technology in smart devices, the RMT method holds potential to transform access and accuracy of human movement assessment. | [] | Train |
42,809 | 17 | Title: Reducing Ambiguities in Line-based Density Plots by Image-space Colorization
Abstract: Line-based density plots are used to reduce visual clutter in line charts with a multitude of individual lines. However, these traditional density plots are often perceived ambiguously, which obstructs the user's identification of underlying trends in complex datasets. Thus, we propose a novel image space coloring method for line-based density plots that enhances their interpretability. Our method employs color not only to visually communicate data density but also to highlight similar regions in the plot, allowing users to identify and distinguish trends easily. We achieve this by performing hierarchical clustering based on the lines passing through each region and mapping the identified clusters to the hue circle using circular MDS. Additionally, we propose a heuristic approach to assign each line to the most probable cluster, enabling users to analyze density and individual lines. We motivate our method by conducting a small-scale user study, demonstrating the effectiveness of our method using synthetic and real-world datasets, and providing an interactive online tool for generating colored line-based density plots. | [] | Validation |
42,810 | 4 | Title: Privacy-Preserving 3-Layer Neural Network Training using Mere Homomorphic Encryption Technique
Abstract: In this manuscript, we consider the problem of privacy-preserving training of neural networks in the mere homomorphic encryption setting. We combine several existing techniques, extend some of them, and finally enable the training of 3-layer neural networks for both regression and classification problems using the mere homomorphic encryption technique. | [
29818,
44139
] | Train |
42,811 | 27 | Title: A Survey on Approximate Edge AI for Energy Efficient Autonomous Driving Services
Abstract: Autonomous driving services rely heavily on sensors such as cameras, LiDAR, radar, and communication modules. A common practice for processing the sensed data is to use a high-performance computing unit placed inside the vehicle, which deploys AI models and algorithms to act as the brain or administrator of the vehicle. The vehicular data generated from average hours of driving can be up to 20 terabytes, depending on the data rate and specification of the sensors. Given the scale and fast growth of services for autonomous driving, it is essential to improve the overall energy and environmental efficiency, especially in the trend towards vehicular electrification (e.g., battery-powered). Although these areas have seen significant advancements in sensor technologies, wireless communications, computing and AI/ML algorithms, the challenge still exists in how to apply and integrate those technology innovations to achieve energy efficiency. This survey reviews and compares the connected vehicular applications, vehicular communications, approximation and Edge AI techniques. The focus is on energy efficiency by covering newly proposed approximation and enabling frameworks. To the best of our knowledge, this survey is the first to review the latest approximate Edge AI frameworks and publicly available datasets in energy-efficient autonomous driving. The insights and vision from this survey can be beneficial for the collaborative driving service development on low-power and memory-constrained systems and also for the energy optimization of autonomous vehicles. | [] | Train |
42,812 | 30 | Title: DialogVCS: Robust Natural Language Understanding in Dialogue System Upgrade
Abstract: With the constant updates of product dialogue systems, we need to retrain the natural language understanding (NLU) model as new data from real users is merged into the existing data accumulated over previous updates. Within the newly added data, new intents may emerge and might have semantic entanglement with the existing intents; e.g., new intents that are semantically too specific or generic are actually subsets or supersets of some existing intents in the semantic space, thus impairing the robustness of the NLU model. As a first attempt to solve this problem, we set up a new benchmark consisting of 4 Dialogue Version Control dataSets (DialogVCS). We formulate intent detection with imperfect data in the system update as a multi-label classification task with positive but unlabeled intents, which asks the models to recognize all the proper intents, including the ones with semantic entanglement, at inference time. We also propose comprehensive baseline models and conduct in-depth analyses for the benchmark, showing that the semantically entangled intents can be effectively recognized with an automatic workflow. | [
12128,
7982
] | Train |
42,813 | 23 | Title: Simulation-Driven Automated End-to-End Test and Oracle Inference
Abstract: This is the first work to report on inferential testing at scale in industry. Specifically, it reports the experience of automated testing of integrity systems at Meta. We built an internal tool called ALPACAS for automated inference of end-to-end integrity tests. Integrity tests are designed to keep users safe online by checking that interventions take place when harmful behaviour occurs on a platform. ALPACAS infers not only the test input, but also the oracle, by observing production interventions to prevent harmful behaviour. This approach allows Meta to automate the process of generating integrity tests for its platforms, such as Facebook and Instagram, which consist of hundreds of millions of lines of production code. We outline the design and deployment of ALPACAS, and report results for its coverage, number of tests produced at each stage of the test inference process, and their pass rates. Specifically, we demonstrate that using ALPACAS significantly improves coverage from a manual test design for the particular aspect of integrity end-to-end testing it was applied to. Further, from a pool of 3 million data points, ALPACAS automatically yields 39 production-ready end-to-end integrity tests. We also report that the ALPACAS-inferred test suite enjoys exceptionally low flakiness for end-to-end testing with its average in-production pass rate of 99.84%. | [] | Train |
42,814 | 30 | Title: Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs
Abstract: With the rapid evolution of large language models (LLMs), new and hard-to-predict harmful capabilities are emerging. This requires developers to be able to identify risks through the evaluation of "dangerous capabilities" in order to responsibly deploy LLMs. In this work, we collect the first open-source dataset to evaluate safeguards in LLMs, and deploy safer open-source LLMs at a low cost. Our dataset is curated and filtered to consist only of instructions that responsible language models should not follow. We annotate and assess the responses of six popular LLMs to these instructions. Based on our annotation, we proceed to train several BERT-like classifiers, and find that these small classifiers can achieve results that are comparable with GPT-4 on automatic safety evaluation. Warning: this paper contains example data that may be offensive, harmful, or biased. | [
514,
33220,
14569,
5071,
10800,
14609,
43641,
45296,
407,
729
] | Train |
42,815 | 30 | Title: Sentence Level Curriculum Learning for Improved Neural Conversational Models
Abstract: Designing machine intelligence to converse with a human user necessarily requires an understanding of how humans participate in conversation, and thus conversation modeling is an important task in natural language processing. New breakthroughs in architecture and data gathering continue to push the performance of such conversational AI models. However, these designs neglect the gradual buildup in sentence structure and complexity experienced by humans as we learn to communicate. During training, our model accepts one or more sentences as input and attempts to predict the next sentence in the conversation one word at a time, so our goal is to separate training into segments, with each segment's corpus comprised of longer sentence pairs than the previous one. This will mimic the desired "buildup" component of human learning. We begin with only "short" length sentence pairs, then only "medium" length pairs, and so on. A majority of our experiments were directed toward optimizing this technique to ensure a proper representation of its potential, since many of the details were open questions. Our segment-trained models were then able to achieve lower validation loss at the end of training than models trained with standard text preparation. This segmented training is straightforward to implement, and our results provide a general direction for future research to implement and improve it. | [] | Train |
42,816 | 11 | Title: Strategic Planning for Flexible Agent Availability in Large Taxi Fleets
Abstract: In large-scale multi-agent systems like taxi fleets, individual agents (taxi drivers) are self-interested (maximizing their own profits), and this can introduce inefficiencies in the system. One such inefficiency concerns the "required" availability of taxis at different time periods during the day. Since a taxi driver can work for a limited number of hours in a day (e.g., 8-10 hours in a city like Singapore), there is a need to optimize the specific hours so as to maximize individual as well as social welfare. Technically, this corresponds to solving a large-scale multi-stage selfish routing game with transition uncertainty. Existing work in addressing this problem is either unable to handle "driver" constraints (e.g., breaks during work hours) or not scalable. To that end, we provide a novel mechanism that builds on replicator dynamics through ideas from behavior cloning. We demonstrate that our methods provide significantly better policies than the existing approach in terms of improving individual agent revenue and overall agent availability. | [] | Train |
42,817 | 2 | Title: Proof Systems for the Modal μ-Calculus Obtained by Determinizing Automata
Abstract: Automata operating on infinite objects feature prominently in the theory of the modal $\mu$-calculus. One such application concerns the tableau games introduced by Niwi\'{n}ski & Walukiewicz, whose winning condition for infinite plays can be naturally checked by a nondeterministic parity stream automaton. Inspired by work of Jungteerapanich and Stirling, we show how determinization constructions of this automaton may be used to directly obtain proof systems for the $\mu$-calculus. More concretely, we introduce a binary tree construction for determinizing nondeterministic parity stream automata. Using this construction we define the annotated cyclic proof system $\mathsf{BT}$, where formulas are annotated by tuples of binary strings. Soundness and completeness of this system follow almost immediately from the correctness of the determinization method. | [] | Train |
42,818 | 24 | Title: Causal Razors
Abstract: When performing causal discovery, assumptions have to be made about how the true causal mechanism corresponds to the underlying joint probability distribution. These assumptions are labeled causal razors in this work. We review numerous causal razors that have appeared in the literature and offer a comprehensive logical comparison of them. In particular, we scrutinize an unpopular causal razor, namely parameter minimality, in multinomial causal models, and examine its logical relations with other well-studied causal razors. Our logical result poses a dilemma in selecting a reasonable scoring criterion for score-based causal search algorithms. | [] | Validation |
42,819 | 16 | Title: Painting 3D Nature in 2D: View Synthesis of Natural Scenes from a Single Semantic Mask
Abstract: We introduce a novel approach that takes a single semantic mask as input to synthesize multi-view consistent color images of natural scenes, trained with a collection of single images from the Internet. Prior works on 3D-aware image synthesis either require multi-view supervision or learning category-level prior for specific classes of objects, which are inapplicable to natural scenes. Our key idea to solve this challenge is to use a semantic field as the intermediate representation, which is easier to reconstruct from an input semantic mask and then translated to a radiance field with the assistance of off-the-shelf semantic image synthesis models. Experiments show that our method outperforms baseline methods and produces photorealistic and multi-view consistent videos of a variety of natural scenes. The project website is https://zju3dv.github.io/paintingnature/. | [
7993,
15445
] | Train |
42,820 | 16 | Title: RigNet++: Efficient Repetitive Image Guided Network for Depth Completion
Abstract: Depth completion aims to recover dense depth maps from sparse ones, where color images are often used to facilitate this task. Recent depth methods primarily focus on image guided learning frameworks. However, blurry guidance in the image and unclear structure in the depth still impede their performance. To tackle these challenges, we explore an efficient repetitive design in our image guided network to gradually and sufficiently recover depth values. Specifically, the efficient repetition is embodied in both the image guidance branch and depth generation branch. In the former branch, we design a dense repetitive hourglass network to extract discriminative image features of complex environments, which can provide powerful contextual instruction for depth prediction. In the latter branch, we introduce a repetitive guidance module based on dynamic convolution, in which an efficient convolution factorization is proposed to reduce the complexity while modeling high-frequency structures progressively. Extensive experiments indicate that our approach achieves superior or competitive results on KITTI, VKITTI, NYUv2, 3D60, and Matterport3D datasets. | [
45506
] | Train |
42,821 | 16 | Title: NEF: Neural Edge Fields for 3D Parametric Curve Reconstruction from Multi-View Images
Abstract: We study the problem of reconstructing 3D feature curves of an object from a set of calibrated multi-view images. To do so, we learn a neural implicit field representing the density distribution of 3D edges which we refer to as Neural Edge Field (NEF). Inspired by NeRF [20], NEF is optimized with a view-based rendering loss where a 2D edge map is rendered at a given view and is compared to the ground-truth edge map extracted from the image of that view. The rendering-based differentiable optimization of NEF fully exploits 2D edge detection, without needing a supervision of 3D edges, a 3D geometric operator or cross-view edge correspondence. Several technical designs are devised to ensure learning a range-limited and view-independent NEF for robust edge extraction. The final parametric 3D curves are extracted from NEF with an iterative optimization method. On our benchmark with synthetic data, we demonstrate that NEF outperforms existing state-of-the-art methods on all metrics. Project page: https://yunfan1202.github.io/NEF/. | [] | Train |
42,822 | 37 | Title: A Semi-Automated Hybrid Schema Matching Framework for Vegetation Data Integration
Abstract: Integrating disparate and distributed vegetation data is critical for consistent and informed national policy development and management. Australia's National Vegetation Information System (NVIS) under the Department of Climate Change, Energy, the Environment and Water (DCCEEW) is the only nationally consistent vegetation database and hierarchical typology of vegetation types in different locations. Currently, this database employs manual approaches for integrating disparate state and territory datasets, which is labour intensive and can be prone to human error. To cope with the ever-increasing need for up-to-date vegetation data derived from heterogeneous data sources, a Semi-Automated Hybrid Matcher (SAHM) is proposed in this paper. SAHM utilizes both schema-level and instance-level matching following a two-tier matching framework. A key novel technique in SAHM called Multivariate Statistical Matching is proposed for automated schema scoring, which takes advantage of domain knowledge and correlations between attributes to enhance the matching. To verify the effectiveness of the proposed framework, the performance of the individual as well as combined components of SAHM has been evaluated. The empirical evaluation shows the effectiveness of the proposed framework, which outperforms existing state-of-the-art methods like Cupid, Coma, Similarity Flooding, Jaccard Leven Matcher, Distribution Based Matcher, and EmbDI. In particular, SAHM achieves between 88% and 100% accuracy with significantly better F1 scores in comparison with state-of-the-art techniques. SAHM is also shown to be several orders of magnitude more efficient than existing techniques. | [] | Train |
42,823 | 10 | Title: Mitigating Human and Computer Opinion Fraud via Contrastive Learning
Abstract: We introduce a novel approach to fake text review detection in collaborative filtering recommender systems. Existing algorithms concentrate on detecting fake reviews generated by language models and ignore texts written by dishonest users, mostly for monetary gain. We propose a contrastive learning-based architecture that utilizes user demographic characteristics, along with the text reviews, as additional evidence against fakes. This way, we are able to account for two different types of fake-review spamming and make the recommendation system more robust to biased reviews. | [] | Train |
42,824 | 23 | Title: On the Effect of Instrumentation on Test Flakiness
Abstract: Test flakiness is a problem that affects testing and processes that rely on it. Several factors cause or influence the flakiness of test outcomes. Test execution order, randomness and concurrency are some of the more common and well-studied causes. Some studies mention code instrumentation as a factor that causes or affects test flakiness. However, evidence for this issue is scarce. In this study, we attempt to systematically collect evidence for the effects of instrumentation on test flakiness. We experiment with common types of instrumentation for Java programs, namely application performance monitoring, coverage, and profiling instrumentation. We then study the effects of instrumentation on a set of nine programs obtained from an existing dataset used to study test flakiness, consisting of popular GitHub projects written in Java. We observe cases where real-world instrumentation causes flakiness in a program. However, this effect is rare. We also discuss a related issue: how instrumentation may interfere with flakiness detection and prevention. | [] | Test |
42,825 | 13 | Title: Sparse-firing regularization methods for spiking neural networks with time-to-first spike coding
Abstract: The training of multilayer spiking neural networks (SNNs) using the error backpropagation algorithm has made significant progress in recent years. Among the various training schemes, the error backpropagation method that directly uses the firing time of neurons has attracted considerable attention because it can realize ideal temporal coding. This method uses time-to-first spike (TTFS) coding, in which each neuron fires at most once, and this restriction on the number of firings enables information to be processed at a very low firing frequency. This low firing frequency increases the energy efficiency of information processing in SNNs, which is important not only because of its similarity with information processing in the brain, but also from an engineering point of view. However, only an upper limit has been provided for TTFS-coded SNNs, and the information-processing capability of SNNs at lower firing frequencies has not been fully investigated. In this paper, we propose two spike timing-based sparse-firing (SSR) regularization methods to further reduce the firing frequency of TTFS-coded SNNs. The first is the membrane potential-aware SSR (M-SSR) method, which has been derived as an extreme form of the loss function of the membrane potential value. The second is the firing condition-aware SSR (F-SSR) method, which is a regularization function obtained from the firing conditions. Both methods are characterized by the fact that they only require information about the firing timing and associated weights. The effects of these regularization methods were investigated on the MNIST, Fashion-MNIST, and CIFAR-10 datasets using multilayer perceptron networks and convolutional neural network structures. | [] | Train |
42,826 | 24 | Title: Clifford Group Equivariant Neural Networks
Abstract: We introduce Clifford Group Equivariant Neural Networks: a novel approach for constructing $\mathrm{E}(n)$-equivariant networks. We identify and study the $\textit{Clifford group}$, a subgroup inside the Clifford algebra, whose definition we slightly adjust to achieve several favorable properties. Primarily, the group's action forms an orthogonal automorphism that extends beyond the typical vector space to the entire Clifford algebra while respecting the multivector grading. This leads to several non-equivalent subrepresentations corresponding to the multivector decomposition. Furthermore, we prove that the action respects not just the vector space structure of the Clifford algebra but also its multiplicative structure, i.e., the geometric product. These findings imply that every polynomial in multivectors, including their grade projections, constitutes an equivariant map with respect to the Clifford group, allowing us to parameterize equivariant neural network layers. Notable advantages are that these layers operate directly on a vector basis and elegantly generalize to any dimension. We demonstrate, notably from a single core implementation, state-of-the-art performance on several distinct tasks, including a three-dimensional $n$-body experiment, a four-dimensional Lorentz-equivariant high-energy physics experiment, and a five-dimensional convex hull experiment. | [
24818,
22134,
15407
] | Test |
42,827 | 16 | Title: Vital Videos: A dataset of videos with PPG and blood pressure ground truths
Abstract: We collected a large dataset consisting of nearly 900 unique participants. For every participant we recorded two 30 second uncompressed videos, synchronized PPG waveforms and a single blood pressure measurement. Gender, age and skin color were also registered for every participant. The dataset includes roughly equal numbers of males and females, as well as participants of all ages. While the skin color distribution could have been more balanced, the dataset contains individuals from every skin color. The data was collected in a diverse set of locations to ensure a wide variety of backgrounds and lighting conditions. In an effort to assist in the research and development of remote vital sign measurement we are now opening up access to this dataset. | [] | Test |
42,828 | 23 | Title: Autonomy Is An Acquired Taste: Exploring Developer Preferences for GitHub Bots
Abstract: Software bots fulfill an important role in collective software development, and their adoption by developers promises increased productivity. Past research has identified that bots that communicate too often can irritate developers, which affects the utility of the bot. However, it is not clear what other properties of human-bot collaboration affect developers' preferences, or what impact these properties might have. The main idea of this paper is to explore characteristics affecting developer preferences for interactions between humans and bots, in the context of GitHub pull requests. We carried out an exploratory sequential study with interviews and a subsequent vignette-based survey. We find developers generally prefer bots that are personable but show little autonomy, however, more experienced developers tend to prefer more autonomous bots. Based on this empirical evidence, we recommend bot developers increase configuration options for bots so that individual developers and projects can configure bots to best align with their own preferences and project cultures. | [
4902
] | Train |
42,829 | 6 | Title: HoloTouch: Interacting with Mixed Reality Visualizations Through Smartphone Proxies
Abstract: We contribute interaction techniques for augmenting mixed reality (MR) visualizations with smartphone proxies. By combining head-mounted displays (HMDs) with mobile touchscreens, we can augment low-resolution holographic 3D charts with precise touch input, haptics feedback, high-resolution 2D graphics, and physical manipulation. Our approach aims to complement both MR and physical visualizations. Most current MR visualizations suffer from unreliable tracking, low visual resolution, and imprecise input. Data physicalizations on the other hand, although allowing for natural physical manipulation, are limited in dynamic and interactive modification. We demonstrate how mobile devices such as smartphones or tablets can serve as physical proxies for MR data interactions, creating dynamic visualizations that support precise manipulation and rich input and output. We describe 6 interaction techniques that leverage the combined physicality, sensing, and output capabilities of HMDs and smartphones, and demonstrate those interactions via a prototype system. Based on an evaluation, we outline opportunities for combining the advantages of both MR and physical charts. | [
18616
] | Train |
42,830 | 16 | Title: DeViL: Decoding Vision features into Language
Abstract: Post-hoc explanation methods have often been criticised for abstracting away the decision-making process of deep neural networks. In this work, we would like to provide natural language descriptions for what different layers of a vision backbone have learned. Our DeViL method decodes vision features into language, not only highlighting the attribution locations but also generating textual descriptions of visual features at different layers of the network. We train a transformer network to translate individual image features of any vision layer into a prompt that a separate off-the-shelf language model decodes into natural language. By employing dropout both per-layer and per-spatial-location, our model can generalize training on image-text pairs to generate localized explanations. As it uses a pre-trained language model, our approach is fast to train, can be applied to any vision backbone, and produces textual descriptions at different layers of the vision network. Moreover, DeViL can create open-vocabulary attribution maps corresponding to words or phrases even outside the training scope of the vision model. We demonstrate that DeViL generates textual descriptions relevant to the image content on CC3M surpassing previous lightweight captioning models and attribution maps uncovering the learned concepts of the vision backbone. Finally, we show DeViL also outperforms the current state-of-the-art on the neuron-wise descriptions of the MILANNOTATIONS dataset. Code available at https://github.com/ExplainableML/DeViL | [] | Train |
42,831 | 24 | Title: On the Relationships between Graph Neural Networks for the Simulation of Physical Systems and Classical Numerical Methods
Abstract: Recent developments in Machine Learning approaches for modelling physical systems have begun to mirror the past development of numerical methods in the computational sciences. In this survey, we begin by providing an example of this with the parallels between the development trajectories of graph neural network acceleration for physical simulations and particle-based approaches. We then give an overview of simulation approaches, which have not yet found their way into state-of-the-art Machine Learning methods and hold the potential to make Machine Learning approaches more accurate and more efficient. We conclude by presenting an outlook on the potential of these approaches for making Machine Learning models for science more efficient. | [] | Validation |
42,832 | 27 | Title: Acoustic Beamforming for Object-relative Distance Estimation and Control in Unmanned Air Vehicles using Propulsion System Noise
Abstract: Unmanned air vehicles often produce significant noise from their propulsion systems. Using this broadband signal as "acoustic illumination" for an auxiliary sensing system could make vehicles more robust at a minimal cost. We present an acoustic beamforming-based algorithm that estimates object-relative distance with a small two-microphone array using the generated propulsion system noise of a vehicle. We demonstrate this approach in several closed-loop distance feedback control tests with a mounted quad-rotor vehicle in a noisy environment and show accurate object-relative distance estimates more than 2x further than the baseline channel-based approach. We conclude that this approach is robust to several practical vehicle and noise situations and shows promise for use in more complex operating environments. | [] | Test
42,833 | 28 | Title: Channel State Information Based User Censoring in Irregular Repetition Slotted Aloha
Abstract: Irregular repetition slotted aloha (IRSA) is a massive random access protocol which can be used to serve a large number of users while achieving a packet loss rate (PLR) close to zero. However, if the number of users is too high, then the system is interference limited and the PLR is close to one. In this paper, we propose a variant of IRSA in the interference limited regime, namely Censored-IRSA (C-IRSA), wherein users with poor channel states censor themselves from transmitting their packets. We theoretically analyze the throughput performance of C-IRSA via density evolution. Using this, we derive closed-form expressions for the optimal choice of the censor threshold which maximizes the throughput while achieving zero PLR among uncensored users. Through extensive numerical simulations, we show that C-IRSA can achieve a 4$\times$ improvement in the peak throughput compared to conventional IRSA. | [] | Train |
42,834 | 18 | Title: Pairs-trading System using Quantum-inspired Combinatorial Optimization Accelerator for Optimal Path Search in Market Graphs
Abstract: Pairs-trading is a trading strategy that involves matching a long position with a short position in two stocks aiming at market-neutral profits. While a typical pairs-trading system monitors the prices of two statistically correlated stocks for detecting a temporary divergence, monitoring and analyzing the prices of more stocks would potentially lead to finding more trading opportunities. Here we report a stock pairs-trading system that finds trading opportunities for any two stocks in an $N$-stock universe using a combinatorial optimization accelerator based on a quantum-inspired algorithm called simulated bifurcation. The trading opportunities are detected through solving an optimal path search problem in an $N$-node directed graph with edge weights corresponding to the products of instantaneous price differences and statistical correlation factors between two stocks. The accelerator is one of Ising machines and operates consecutively to find multiple opportunities in a market situation with avoiding duplicate detections by a tabu search technique. It has been demonstrated in the Tokyo Stock Exchange that the FPGA (field-programmable gate array)-based trading system has a sufficiently low latency (33 $\mu$s for $N$=15 or 210 pairs) to execute the pairs-trading strategy based on optimal path search in market graphs. | [
28656
] | Train |
42,835 | 24 | Title: Guarded Policy Optimization with Imperfect Online Demonstrations
Abstract: The Teacher-Student Framework (TSF) is a reinforcement learning setting where a teacher agent guards the training of a student agent by intervening and providing online demonstrations. When assumed optimal, the teacher policy has the perfect timing and capability to intervene in the learning process of the student agent, providing a safety guarantee and exploration guidance. Nevertheless, in many real-world settings it is expensive or even impossible to obtain a well-performing teacher policy. In this work, we relax the assumption of a well-performing teacher and develop a new method that can incorporate arbitrary teacher policies with modest or inferior performance. We instantiate an Off-Policy Reinforcement Learning algorithm, termed Teacher-Student Shared Control (TS2C), which incorporates teacher intervention based on trajectory-based value estimation. Theoretical analysis validates that the proposed TS2C algorithm attains efficient exploration and a substantial safety guarantee without being affected by the teacher's own performance. Experiments on various continuous control tasks show that our method can exploit teacher policies at different performance levels while maintaining a low training cost. Moreover, the student policy surpasses the imperfect teacher policy in terms of higher accumulated reward in held-out testing environments. Code is available at https://metadriverse.github.io/TS2C. | [] | Train
42,836 | 9 | Title: A Fast Algorithm for Consistency Checking Partially Ordered Time
Abstract: Partially ordered models of time occur naturally in applications where agents/processes cannot perfectly communicate with each other, and can be traced back to the seminal work of Lamport. In this paper we consider the problem of deciding if a (likely incomplete) description of a system of events is consistent, the network consistency problem for the point algebra of partially ordered time (POT). While the classical complexity of this problem has been fully settled, comparably little is known of the fine-grained complexity of POT except that it can be solved in O*((0.368n)^n) time by enumerating ordered partitions. We construct a much faster algorithm with a run-time bounded by O*((0.26n)^n), which, e.g., is roughly 1000 times faster than the naive enumeration algorithm in a problem with 20 events. This is achieved by a sophisticated enumeration of structures similar to total orders, which are then greedily expanded toward a solution. While similar ideas have been explored earlier for related problems it turns out that the analysis for POT is non-trivial and requires significant new ideas. | [] | Validation
42,837 | 16 | Title: A Continual Learning Approach for Cross-Domain White Blood Cell Classification
Abstract: Accurate classification of white blood cells in peripheral blood is essential for diagnosing hematological diseases. Due to constantly evolving clinical settings, data sources, and disease classifications, it is necessary to update machine learning classification models regularly for practical real-world use. Such models significantly benefit from sequentially learning from incoming data streams without forgetting previously acquired knowledge. However, models can suffer from catastrophic forgetting, causing a drop in performance on previous tasks when fine-tuned on new data. Here, we propose a rehearsal-based continual learning approach for class incremental and domain incremental scenarios in white blood cell classification. To choose representative samples from previous tasks, we employ exemplar set selection based on the model's predictions. This involves selecting the most confident samples and the most challenging samples identified through uncertainty estimation of the model. We thoroughly evaluated our proposed approach on three white blood cell classification datasets that differ in color, resolution, and class composition, including scenarios where new domains or new classes are introduced to the model with every task. We also test a long class incremental experiment with both new domains and new classes. Our results demonstrate that our approach outperforms established baselines in continual learning, including existing iCaRL and EWC methods for classifying white blood cells in cross-domain environments. | [] | Train |
42,838 | 24 | Title: Knowledge-Driven Multi-Agent Reinforcement Learning for Computation Offloading in Cybertwin-Enabled Internet of Vehicles
Abstract: By offloading computation-intensive tasks of vehicles to roadside units (RSUs), mobile edge computing (MEC) in the Internet of Vehicles (IoV) can relieve the onboard computation burden. However, existing model-based task offloading methods suffer from heavy computational complexity with the increase of vehicles and data-driven methods lack interpretability. To address these challenges, in this paper, we propose a knowledge-driven multi-agent reinforcement learning (KMARL) approach to reduce the latency of task offloading in cybertwin-enabled IoV. Specifically, in the considered scenario, the cybertwin serves as a communication agent for each vehicle to exchange information and make offloading decisions in the virtual space. To reduce the latency of task offloading, a KMARL approach is proposed to select the optimal offloading option for each vehicle, where graph neural networks are employed by leveraging domain knowledge concerning graph-structure communication topology and permutation invariance into neural networks. Numerical results show that our proposed KMARL yields higher rewards and demonstrates improved scalability compared with other methods, benefitting from the integration of domain knowledge. | [] | Train |
42,839 | 24 | Title: tSPM+; a high-performance algorithm for mining transitive sequential patterns from clinical data
Abstract: The increasing availability of large clinical datasets collected from patients can enable new avenues for computational characterization of complex diseases using different analytic algorithms. One of the promising new methods for extracting knowledge from large clinical datasets involves temporal pattern mining integrated with machine learning workflows. However, mining these temporal patterns is a computationally intensive task and has memory repercussions. Current algorithms, such as the temporal sequence pattern mining (tSPM) algorithm, are already providing promising outcomes, but still leave room for optimization. In this paper, we present the tSPM+ algorithm, a high-performance implementation of the tSPM algorithm, which adds a new dimension by adding the duration to the temporal patterns. We show that the tSPM+ algorithm provides a speedup of up to a factor of 980 and an up to 48-fold improvement in memory consumption. Moreover, we present a Docker container with an R-package. We also provide vignettes for easy integration into already existing machine learning workflows and use the mined temporal sequences to identify post-COVID-19 patients and their symptoms according to the WHO definition. | [] | Validation
42,840 | 24 | Title: Imitation Learning from Nonlinear MPC via the Exact Q-Loss and its Gauss-Newton Approximation
Abstract: This work presents a novel loss function for learning nonlinear Model Predictive Control policies via Imitation Learning. Standard approaches to Imitation Learning neglect information about the expert and generally adopt a loss function based on the distance between expert and learned controls. In this work, we present a loss based on the Q-function directly embedding the performance objectives and constraint satisfaction of the associated Optimal Control Problem (OCP). However, training a Neural Network with the Q-loss requires solving the associated OCP for each new sample. To alleviate the computational burden, we derive a second Q-loss based on the Gauss-Newton approximation of the OCP resulting in a faster training time. We validate our losses against Behavioral Cloning, the standard approach to Imitation Learning, on the control of a nonlinear system with constraints. The final results show that the Q-function-based losses significantly reduce the amount of constraint violations while achieving comparable or better closed-loop costs. | [] | Train |
42,841 | 26 | Title: The Half-Life of a Tweet
Abstract: Twitter has started to share an impression count variable as part of the available public metrics for every Tweet collected with Twitter’s APIs. With the information about how often a particular Tweet has been shown to Twitter users at the time of data collection, we can learn important insights about the dissemination process of a Tweet by measuring its impression count repeatedly over time. With our preliminary analysis, we can show that on average the peak of impressions per second is 72 seconds after a Tweet was sent and that after 24 hours, no relevant number of impressions can be observed for ∼95% of all Tweets. Finally, we estimate that the median half-life of a Tweet, i.e. the time it takes before half of all impressions are created, is about 80 minutes. | [
23587,
7580
] | Train |
42,842 | 30 | Title: Med-HALT: Medical Domain Hallucination Test for Large Language Models
Abstract: This research paper focuses on the challenges posed by hallucinations in large language models (LLMs), particularly in the context of the medical domain. Hallucination, wherein these models generate plausible yet unverified or incorrect information, can have serious consequences in healthcare applications. We propose a new benchmark and dataset, Med-HALT (Medical Domain Hallucination Test), designed specifically to evaluate and reduce hallucinations. Med-HALT provides a diverse multinational dataset derived from medical examinations across various countries and includes multiple innovative testing modalities. Med-HALT includes two categories of tests, reasoning and memory-based hallucination tests, designed to assess LLMs' problem-solving and information retrieval abilities. Our study evaluated leading LLMs, including Text Davinci, GPT-3.5, LlaMa-2, MPT, and Falcon, revealing significant differences in their performance. The paper provides detailed insights into the dataset, promoting transparency and reproducibility. Through this work, we aim to contribute to the development of safer and more reliable language models in healthcare. Our benchmark can be found at medhalt.github.io | [
15075,
13029,
109,
38102,
43641,
42747
] | Train |
42,843 | 26 | Title: Does society show differential attention to researchers based on gender and field?
Abstract: While not all researchers prioritize social impact, it is undeniably a crucial aspect that adds significance to their work. The objective of this paper is to explore potential gender differences in the social attention paid to researchers and to examine their association with specific fields of study. To achieve this goal, the paper analyzes four dimensions of social influence and examines three measures of social attention to researchers. The dimensions are media influence (mentions in mainstream news), political influence (mentions in public policy reports), social media influence (mentions in Twitter), and educational influence (mentions in Wikipedia). The measures of social attention to researchers are: proportion of publications with social mentions (social attention orientation), mentions per publication (level of social attention), and mentions per mentioned publication (intensity of social attention). By analyzing the rankings of authors -- for the four dimensions with the three measures in the 22 research fields of the Web of Science database -- and by using Spearman correlation coefficients, we conclude that: 1) significant differences are observed between fields; 2) the dimensions capture different and independent aspects of the social impact. Finally, we use non-parametric means comparison tests to detect gender bias in social attention. We conclude that for most fields and dimensions with enough non-zero altmetrics data, gender differences in social attention are not predominant, but are still present and vary across fields. | [] | Train |
42,844 | 24 | Title: Long-term drought prediction using deep neural networks based on geospatial weather data
Abstract: The accurate prediction of drought probability in specific regions is crucial for informed decision-making in agricultural practices. It is important to make predictions one year in advance, particularly for long-term decisions. However, forecasting this probability presents challenges due to the complex interplay of various factors within the region of interest and neighboring areas. In this study, we propose an end-to-end solution to address this issue based on various spatiotemporal neural networks. The models considered focus on predicting the drought intensity based on the Palmer Drought Severity Index (PDSI) for subregions of interest, leveraging intrinsic factors and insights from climate models to enhance drought predictions. Comparative evaluations demonstrate the superior accuracy of Convolutional LSTM (ConvLSTM) and transformer models compared to baseline gradient boosting and logistic regression solutions. The two former models achieved impressive ROC AUC scores from 0.90 to 0.70 for forecast horizons from one to six months, outperforming baseline models. The transformer showed superiority for shorter horizons, while ConvLSTM did so for longer horizons. Thus, we recommend selecting the models accordingly for long-term drought forecasting. To ensure the broad applicability of the considered models, we conduct extensive validation across regions worldwide, considering different environmental conditions. We also run several ablation and sensitivity studies to challenge our findings and provide additional information on how to solve the problem. | [] | Train |
42,845 | 3 | Title: Response to "The digital pound: a new form of money for households and businesses"
Abstract: This document constitutes a response to a Consultation Paper published by the Bank of England and HM Treasury, "The digital pound: a new form of money for households and businesses?", the latest document in a series that includes "Central Bank Digital Currency: opportunities, challenges and design" in 2020 and "New forms of digital money" in 2021. The Consultation Paper concerns the adoption of central bank digital currency (CBDC) for retail use in the United Kingdom by the Bank of England. We shall address the consultation questions directly in the third section of this document. | [
28434,
10404,
31957
] | Train |
42,846 | 4 | Title: Digital Privacy Under Attack: Challenges and Enablers
Abstract: Users have renewed interest in protecting their private data in the digital space. When they don't believe that their privacy is sufficiently covered by one platform, they will readily switch to another. Such an increasing level of privacy awareness has made privacy preservation an essential research topic. Nevertheless, new privacy attacks are emerging day by day. Therefore, a holistic survey to compare the discovered techniques on attacks over privacy preservation and their mitigation schemes is essential in the literature. We develop a study to fill this gap by assessing the resilience of privacy-preserving methods to various attacks and conducting a comprehensive review of countermeasures from a broader perspective. First, we introduce the fundamental concepts and critical components of privacy attacks. Second, we comprehensively cover major privacy attacks targeted at anonymous data, statistical aggregate data, and privacy-preserving models. We also summarize popular countermeasures to mitigate these attacks. Finally, some promising future research directions and related issues in the privacy community are envisaged. We believe this survey will successfully shed some light on privacy research and encourage researchers to entirely understand the resilience of different existing privacy-preserving approaches. | [] | Test |
42,847 | 16 | Title: FeatAug-DETR: Enriching One-to-Many Matching for DETRs with Feature Augmentation
Abstract: One-to-one matching is a crucial design in DETR-like object detection frameworks. It enables the DETR to perform end-to-end detection. However, it also faces challenges of lacking positive sample supervision and slow convergence speed. Several recent works proposed the one-to-many matching mechanism to accelerate training and boost detection performance. We revisit these methods and model them in a unified format of augmenting the object queries. In this paper, we propose two methods that realize one-to-many matching from a different perspective of augmenting images or image features. The first method is One-to-many Matching via Data Augmentation (denoted as DataAug-DETR). It spatially transforms the images and includes multiple augmented versions of each image in the same training batch. Such a simple augmentation strategy already achieves one-to-many matching and surprisingly improves DETR's performance. The second method is One-to-many matching via Feature Augmentation (denoted as FeatAug-DETR). Unlike DataAug-DETR, it augments the image features instead of the original images and includes multiple augmented features in the same batch to realize one-to-many matching. FeatAug-DETR significantly accelerates DETR training and boosts detection performance while keeping the inference speed unchanged. We conduct extensive experiments to evaluate the effectiveness of the proposed approach on DETR variants, including DAB-DETR, Deformable-DETR, and H-Deformable-DETR. Without extra training data, FeatAug-DETR shortens the training convergence periods of Deformable-DETR to 24 epochs and achieves 58.3 AP on COCO val2017 set with Swin-L as the backbone. | [] | Train |
42,848 | 16 | Title: Multimodal Prompt Retrieval for Generative Visual Question Answering
Abstract: Recent years have witnessed impressive results of pre-trained vision-language models on knowledge-intensive tasks such as visual question answering (VQA). Despite the recent advances in VQA, existing methods mainly adopt a discriminative formulation that predicts answers within a pre-defined label set, leading to easy overfitting on low-resource domains with limited labeled data (e.g., medicine) and poor generalization under domain shift to another dataset. To tackle this limitation, we propose a novel generative model enhanced by multimodal prompt retrieval (MPR) that integrates retrieved prompts and multimodal features to generate answers in free text. Our generative model enables rapid zero-shot dataset adaptation to unseen data distributions and open-set answer labels across datasets. Our experiments on medical VQA tasks show that MPR outperforms its non-retrieval counterpart by up to 30% accuracy points in a few-shot domain adaptation setting. | [] | Train |
42,849 | 10 | Title: Thespian: Multi-Character Text Role-Playing Game Agents
Abstract: Text-adventure games and text role-playing games are grand challenges for reinforcement learning game playing agents. Text role-playing games are open-ended environments where an agent must faithfully play a particular character. We consider the distinction between characters and actors, where an actor agent has the ability to play multiple characters. We present a framework we call a thespian agent that can learn to emulate multiple characters along with a soft prompt that can be used to direct it as to which character to play at any time. We further describe an attention mechanism that allows the agent to learn new characters that are based on previously learned characters in a few-shot fashion. We show that our agent outperforms the state of the art agent framework in multi-character learning and few-shot learning. | [
16379
] | Train |
42,850 | 8 | Title: Multimodal Multiple Federated Feature Construction Method for IoT Environments
Abstract: The fast development of Internet-of-Things (IoT) devices and applications has led to vast data collection, potentially containing irrelevant, noisy, or redundant features that degrade learning model performance. These collected data can be processed on either end-user devices (clients) or edge/cloud server. Feature construction is a pre-processing technique that can generate discriminative features and reveal hidden relationships between original features within a dataset, leading to improved performance and reduced computational complexity of learning models. Moreover, the communication cost between clients and edge/cloud server can be minimized in situations where a dataset needs to be transmitted for further processing. In this paper, the first federated feature construction (FFC) method called multimodal multiple FFC (MMFFC) is proposed by using multimodal optimization and gravitational search programming algorithm. This is a collaborative method for constructing multiple high-level features without sharing clients' datasets to enhance the trade-off between accuracy of the trained model and overall communication cost of the system, while also reducing computational complexity of the learning model. We analyze and compare the accuracy-cost trade-off of two scenarios, namely, 1) MMFFC federated learning (FL), using vanilla FL with pre-processed datasets on clients and 2) MMFFC centralized learning, transferring pre-processed datasets to an edge server and using centralized learning model. The results on three datasets for the first scenario and eight datasets for the second one demonstrate that the proposed method can reduce the size of datasets by about 60%, thereby reducing communication cost and improving the accuracy of the learning models tested on almost all datasets. | [
5648
] | Test |
42,851 | 1 | Title: Transcoding Quality Prediction for Adaptive Video Streaming
Abstract: In recent years, video streaming applications have proliferated the demand for Video Quality Assessment (VQA). Reduced reference video quality assessment (RR-VQA) is a category of VQA where certain features (e.g., texture, edges) of the original video are provided for quality assessment. It is a popular research area for various applications such as social media, online games, and video streaming. This paper introduces a reduced reference Transcoding Quality Prediction Model (TQPM) to determine the visual quality score of the video possibly transcoded in multiple stages. The quality is predicted using Discrete Cosine Transform (DCT)-energy-based features of the video (i.e., the video's brightness, spatial texture information, and temporal activity) and the target bitrate representation of each transcoding stage. To do that, the problem is formulated, and a Long Short-Term Memory (LSTM)-based quality prediction model is presented. Experimental results illustrate that, on average, TQPM yields PSNR, SSIM, and VMAF predictions with an R2 score of 0.83, 0.85, and 0.87, respectively, and Mean Absolute Error (MAE) of 1.31 dB, 1.19 dB, and 3.01, respectively, for single-stage transcoding. Furthermore, an R2 score of 0.84, 0.86, and 0.91, respectively, and MAE of 1.32 dB, 1.33 dB, and 3.25, respectively, are observed for a two-stage transcoding scenario. Moreover, the average processing time of TQPM for 4s segments is 0.328s, making it a practical VQA method in online streaming applications. | [
39714,
18036,
31207
] | Train |
42,852 | 8 | Title: Real-Time High-Resolution Pedestrian Detection in Crowded Scenes via Parallel Edge Offloading
Abstract: —To identify dense and small-size pedestrians in surveillance systems, high-resolution cameras are widely deployed, where high-resolution images are captured and delivered to off- the-shelf pedestrian detection models. However, given the highly computation-intensive workload brought by the high resolution, the resource-constrained cameras fail to afford accurate inference in real time. To address that, we propose H ODE , an offloaded video analytic framework that utilizes multiple edge nodes in proximity to expedite pedestrian detection with high-resolution inputs. Specifically, H ODE can intelligently split high-resolution images into respective regions and then offload them to distributed edge nodes to perform pedestrian detection in parallel. A spatio-temporal flow filtering method is designed to enable context-aware region partitioning, as well as a DRL-based scheduling algorithm to allow accuracy-aware load balance among heterogeneous edge nodes. Extensive evaluation results using realistic prototypes show that H ODE can achieve up to 2.01× speedup with very mild accuracy loss. | [] | Train |
42,853 | 24 | Title: Two-step counterfactual generation for OOD examples
Abstract: Two fundamental requirements for the deployment of machine learning models in safety-critical systems are to be able to detect out-of-distribution (OOD) data correctly and to be able to explain the prediction of the model. Although significant effort has gone into both OOD detection and explainable AI, there has been little work on explaining why a model predicts a certain data point is OOD. In this paper, we address this question by introducing the concept of an OOD counterfactual, which is a perturbed data point that iteratively moves between different OOD categories. We propose a method for generating such counterfactuals, investigate its application on synthetic and benchmark data, and compare it to several benchmark methods using a range of metrics. | [] | Test |
42,854 | 16 | Title: MixFormerV2: Efficient Fully Transformer Tracking
Abstract: Transformer-based trackers have achieved strong accuracy on the standard benchmarks. However, their efficiency remains an obstacle to practical deployment on both GPU and CPU platforms. In this paper, to overcome this issue, we propose a fully transformer tracking framework, coined as \emph{MixFormerV2}, without any dense convolutional operation and complex score prediction module. Our key design is to introduce four special prediction tokens and concatenate them with the tokens from the target template and search areas. Then, we apply a unified transformer backbone to this mixed token sequence. These prediction tokens are able to capture the complex correlation between the target template and search area via mixed attentions. Based on them, we can easily predict the tracking box and estimate its confidence score through simple MLP heads. To further improve the efficiency of MixFormerV2, we present a new distillation-based model reduction paradigm, including dense-to-sparse distillation and deep-to-shallow distillation. The former one aims to transfer knowledge from the dense-head based MixViT to our fully transformer tracker, while the latter one is used to prune some layers of the backbone. We instantiate two types of MixFormerV2, where the MixFormerV2-B achieves an AUC of 70.6\% on LaSOT and an AUC of 57.4\% on TNL2k with a high GPU speed of 165 FPS, and the MixFormerV2-S surpasses FEAR-L by 2.7\% AUC on LaSOT with a real-time CPU speed. | [
22680,
26484
] | Test |
42,855 | 16 | Title: SwinFace: A Multi-task Transformer for Face Recognition, Expression Recognition, Age Estimation and Attribute Estimation
Abstract: In recent years, vision transformers have been introduced into face recognition and analysis and have achieved performance breakthroughs. However, most previous methods generally train a single model or an ensemble of models to perform the desired task, which ignores the synergy among different tasks and fails to achieve improved prediction accuracy, increased data efficiency, and reduced training time. This paper presents a multi-purpose algorithm for simultaneous face recognition, facial expression recognition, age estimation, and face attribute estimation (40 attributes including gender) based on a single Swin Transformer. Our design, the SwinFace, consists of a single shared backbone together with a subnet for each set of related tasks. To address the conflicts among multiple tasks and meet the different demands of tasks, a Multi-Level Channel Attention (MLCA) module is integrated into each task-specific analysis subnet, which can adaptively select the features from optimal levels and channels to perform the desired tasks. Extensive experiments show that the proposed model has a better understanding of the face and achieves excellent performance for all tasks. Especially, it achieves 90.97% accuracy on RAF-DB and 0.22 $\epsilon$-error on CLAP2015, which are state-of-the-art results on facial expression recognition and age estimation respectively. The code and models will be made publicly available at https://github.com/lxq1000/SwinFace. | [] | Train |
42,856 | 10 | Title: Testing System Intelligence
Abstract: We discuss the adequacy of tests for intelligent systems and practical problems raised by their implementation. We propose the replacement test as the ability of a system to replace successfully another system performing a task in a given context. We show how it can characterize salient aspects of human intelligence that cannot be taken into account by the Turing test. We argue that building intelligent systems passing the replacement test involves a series of technical problems that are outside the scope of current AI. We present a framework for implementing the proposed test and validating the properties of the intelligent systems. We discuss the inherent limitations of intelligent system validation and advocate new theoretical foundations for extending existing rigorous test methods. We suggest that the replacement test, based on the complementarity of skills between human and machine, can lead to a multitude of intelligence concepts reflecting the ability to combine data-based and symbolic knowledge to varying degrees. | [
417,
5078
] | Train |
42,857 | 16 | Title: Slot-VAE: Object-Centric Scene Generation with Slot Attention
Abstract: Slot attention has shown remarkable object-centric representation learning performance in computer vision tasks without requiring any supervision. Despite its object-centric binding ability brought by compositional modelling, as a deterministic module, slot attention lacks the ability to generate novel scenes. In this paper, we propose the Slot-VAE, a generative model that integrates slot attention with the hierarchical VAE framework for object-centric structured scene generation. For each image, the model simultaneously infers a global scene representation to capture high-level scene structure and object-centric slot representations to embed individual object components. During generation, slot representations are generated from the global scene representation to ensure coherent scene structures. Our extensive evaluation of the scene generation ability indicates that Slot-VAE outperforms slot representation-based generative baselines in terms of sample quality and scene structure accuracy. | [
22010
] | Train |
42,858 | 10 | Title: PASTA: Pretrained Action-State Transformer Agents
Abstract: Self-supervised learning has brought about a revolutionary paradigm shift in various computing domains, including NLP, vision, and biology. Recent approaches involve pre-training transformer models on vast amounts of unlabeled data, serving as a starting point for efficiently solving downstream tasks. In the realm of reinforcement learning, researchers have recently adapted these approaches by developing models pre-trained on expert trajectories, enabling them to address a wide range of tasks, from robotics to recommendation systems. However, existing methods mostly rely on intricate pre-training objectives tailored to specific downstream applications. This paper presents a comprehensive investigation of models we refer to as Pretrained Action-State Transformer Agents (PASTA). Our study uses a unified methodology and covers an extensive set of general downstream tasks including behavioral cloning, offline RL, sensor failure robustness, and dynamics change adaptation. Our goal is to systematically compare various design choices and provide valuable insights to practitioners for building robust models. Key highlights of our study include tokenization at the action and state component level, using fundamental pre-training objectives like next token prediction, training models across diverse domains simultaneously, and using parameter efficient fine-tuning (PEFT). The developed models in our study contain fewer than 10 million parameters and the application of PEFT enables fine-tuning of fewer than 10,000 parameters during downstream adaptation, allowing a broad community to use these models and reproduce our experiments. We hope that this study will encourage further research into the use of transformers with first-principles design choices to represent RL trajectories and contribute to robust policy learning. | [
23970,
13700,
24196,
19662,
35250,
13564
] | Train |
42,859 | 18 | Title: Quantum Communication in 6G Satellite Networks: Entanglement Distribution Across Changing Topologies
Abstract: As LEO/VLEO satellites offer many attractive features, such as low transmission delay, they are expected to be an integral part of 6G. Global entanglement distribution over LEO and VLEO satellites network must reckon with satellite movement over time. Current studies do not fully capture the dynamic nature of satellite constellations. We model a dynamic LEO/VLEO satellite network as a time-varying graph and construct a sequence of static graphs to represent a dynamic network. We study the entanglement distribution problem between a set of source-destination node pairs in this dynamic network utilizing Multi-commodity Flow (MCF). Solving MCF over a sequence of graphs independently for each graph may produce a completely different set of paths. Changing the set of paths every time the graph topology changes may involve a significant amount of overhead, as an established set of paths must be taken down and a new set of paths established. We propose a technique that will avoid this overhead by computing only one set of paths P to be used over all the graphs in the sequence. The degraded performance offered by P may be viewed as the cost of using P. The benefit of using P is the overhead cost of path switching that can be avoided. We provide a cost-benefit analysis in a LEO/VLEO constellation for entanglement distribution between multiple source-destination pairs. Our extensive experimentation shows that a significant amount of savings in overhead can be achieved if one is willing to accept a slightly degraded performance. | [
29327
] | Train |
42,860 | 4 | Title: A Secure Third-Party Auditing Scheme Based on Blockchain Technology in Cloud Storage
Abstract: With the help of a shared pool of reconfigurable computing resources, clients of the cloud-based model can keep sensitive data remotely and access the apps and services it offers on-demand without having to worry about maintaining and storing it locally. To protect the privacy of the public auditing system that supports the cloud data exchange system, the data owner can change the data using the private key and publish it in the cloud. The RSA technique is used to produce key codes for privacy in the cloud services environment, utilizing the system's baseboard number, disc number, and client passcode for validation. The method is based on a cutting-edge User End Generated (UEG) privacy technique that minimizes the involvement of a third party and improves security checks by automatically documenting destructive activities. To strengthen extensibility, various authorization-assigning modalities and block access patterns were established together with current operational design approaches. In order to meet the demands for decentralization, fine-grained auditability, extensibility, flexibility, and privacy protection for multilevel data access in networked environments, the suggested approach makes use of blockchain technology. According to a thorough performance and security assessment, the current proposal is exceptionally safe and effective. | [
5154,
25547,
28036,
10838
] | Train |
42,861 | 4 | Title: Physical Zero-Knowledge Proof for Ball Sort Puzzle
Abstract: nan | [
5478,
42575
] | Train |
42,862 | 3 | Title: Towards a Contemporary Definition of Cybersecurity
Abstract: The report provides an intricate analysis of cyber security defined in contemporary operational digital environments. An extensive literature review is formed to determine how the construct is reviewed in modern scholarly contexts. The article seeks to offer a comprehensive definition of the term "cybersecurity" to accentuate its multidisciplinary perspectives. A meaningful, concise, and inclusive dimension will be provided to assist in designing scholarly discourse on the subject. The report will offer a unified framework for examining activities that constitute the concept, resulting in a new definition: "Cybersecurity is the collection and concerting of resources including personnel and infrastructure, structures, and processes to protect networks and cyber-enabled computer systems from events that compromise the integrity and interfere with property rights, resulting in some extent of the loss." The encapsulation of the interdisciplinary domains will be critical in improving understanding and response to emerging challenges in cyberspace. | [] | Validation |
42,863 | 25 | Title: A Deep Learning Architecture with Spatio-Temporal Focusing for Detecting Respiratory Anomalies
Abstract: This paper presents a deep learning system for detecting anomalies from respiratory sound recordings. Our system initially performs audio feature extraction using Continuous Wavelet transformation. This transformation converts the respiratory sound input into a two-dimensional spectrogram where both spectral and temporal features are presented. Then, our proposed deep learning architecture, inspired by an Inception-residual-based backbone, applies spatio-temporal focusing and a multi-head attention mechanism to classify respiratory anomalies. In this work, we evaluate our proposed models on the benchmark SPRSound (The Open-Source SJTU Paediatric Respiratory Sound) database proposed by the IEEE BioCAS 2023 challenge. As regards the Score, computed as the average of the average score and the harmonic score, our robust system achieved Top-1 performance with Scores of 0.810, 0.667, 0.744, and 0.608 in Tasks 1-1, 1-2, 2-1, and 2-2, respectively. | [] | Validation |
42,864 | 24 | Title: ODIN: On-demand Data Formulation to Mitigate Dataset Lock-in
Abstract: ODIN is an innovative approach that addresses the problem of dataset constraints by integrating generative AI models. Traditional zero-shot learning methods are constrained by the training dataset. To fundamentally overcome this limitation, ODIN attempts to mitigate the dataset constraints by generating on-demand datasets based on user requirements. ODIN consists of three main modules: a prompt generator, a text-to-image generator, and an image post-processor. To generate high-quality prompts and images, we adopted a large language model (e.g., ChatGPT) and a text-to-image diffusion model (e.g., Stable Diffusion), respectively. We evaluated ODIN on various datasets in terms of model accuracy and data diversity to demonstrate its potential, and conducted post-experiments for further investigation. Overall, ODIN is a feasible approach that enables AI to learn unseen knowledge beyond the training dataset. | [
10624
] | Train |
42,865 | 24 | Title: An Introduction to Bi-level Optimization: Foundations and Applications in Signal Processing and Machine Learning
Abstract: Recently, bi-level optimization (BLO) has taken center stage in some very exciting developments in the area of signal processing (SP) and machine learning (ML). Roughly speaking, BLO is a classical optimization problem that involves two levels of hierarchy (i.e., upper and lower levels), wherein obtaining the solution to the upper-level problem requires solving the lower-level one. BLO has become popular largely because it is powerful in modeling problems in SP and ML, among others, that involve optimizing nested objective functions. Prominent applications of BLO range from resource allocation for wireless systems to adversarial machine learning. In this work, we focus on a class of tractable BLO problems that often appear in SP and ML applications. We provide an overview of some basic concepts of this class of BLO problems, such as their optimality conditions, standard algorithms (including their optimization principles and practical implementations), as well as how they can be leveraged to obtain state-of-the-art results for a number of key SP and ML applications. Further, we discuss some recent advances in BLO theory, its implications for applications, and point out some limitations of the state-of-the-art that require significant future research efforts. Overall, we hope that this article can serve to accelerate the adoption of BLO as a generic tool to model, analyze, and innovate on a wide array of emerging SP and ML applications. | [
38019,
39261,
7574
] | Test |
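The two-level hierarchy described in this abstract has a standard compact form. As a generic sketch (the symbols F, f, x, and y are placeholder names, not notation taken from the article):

```latex
\min_{x \in \mathcal{X}} \; F\bigl(x, y^{*}(x)\bigr)
\qquad \text{s.t.} \qquad
y^{*}(x) \in \operatorname*{arg\,min}_{y \in \mathcal{Y}} \, f(x, y)
```

The upper-level objective F can only be evaluated once the lower-level problem in f has been solved, which is the nesting the abstract refers to.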
42,866 | 16 | Title: On the Benefits of 3D Pose and Tracking for Human Action Recognition
Abstract: In this work we study the benefits of using tracking and 3D poses for action recognition. To achieve this, we take the Lagrangian view on analysing actions over a trajectory of human motion rather than at a fixed point in space. Taking this stand allows us to use the tracklets of people to predict their actions. In this spirit, first we show the benefits of using 3D pose to infer actions, and study person-person interactions. Subsequently, we propose a Lagrangian Action Recognition model by fusing 3D pose and contextualized appearance over tracklets. To this end, our method achieves state-of-the-art performance on the AVA v2.2 dataset on both pose only settings and on standard benchmark settings. When reasoning about the action using only pose cues, our pose model achieves +10.0 mAP gain over the corresponding state-of-the-art while our fused model has a gain of +2.8 mAP over the best state-of-the-art model. Code and results are available at: https://brjathu.github.io/LART | [
5632,
15666,
35242,
19882
] | Train |
42,867 | 24 | Title: Towards Complex Dynamic Physics System Simulation with Graph Neural ODEs
Abstract: The great learning ability of deep learning models helps us comprehend the real physical world, making learning to simulate complicated particle systems a promising endeavour. However, the complex laws of the physical world pose significant challenges to learning-based simulations, such as the varying spatial dependencies between interacting particles and varying temporal dependencies between particle system states in different time stamps, which dominate particles' interacting behaviour and the physical systems' evolution patterns. Existing learning-based simulation methods fail to fully account for the complexities, making them unable to yield satisfactory simulations. To better comprehend the complex physical laws, this paper proposes a novel learning-based simulation model, Graph Networks with Spatial-Temporal neural Ordinary Differential Equations (GNSTODE), that characterizes the varying spatial and temporal dependencies in particle systems using a unified end-to-end framework. Through training with real-world particle-particle interaction observations, GNSTODE is able to simulate any possible particle systems with high precision. We empirically evaluate GNSTODE's simulation performance on two real-world particle systems, Gravity and Coulomb, with varying levels of spatial and temporal dependencies. The results show that the proposed GNSTODE yields significantly better simulations than state-of-the-art learning-based simulation methods, which proves that GNSTODE can serve as an effective solution to particle simulations in real-world applications. | [
36314,
32869
] | Train |
42,868 | 16 | Title: Confidence-aware calibration and scoring functions for curriculum learning
Abstract: Despite the great success of state-of-the-art deep neural networks, several studies have reported models to be over-confident in predictions, indicating miscalibration. Label Smoothing has been proposed as a solution to the over-confidence problem and works by softening hard targets during training, typically by distributing part of the probability mass from a ‘one-hot’ label uniformly to all other labels. However, neither model nor human confidence in a label is likely to be uniformly distributed in this manner, with some labels more likely to be confused than others. In this paper we integrate notions of model confidence and human confidence with label smoothing, respectively Model Confidence LS and Human Confidence LS, to achieve better model calibration and generalization. To enhance model generalization, we show how our model and human confidence scores can be successfully applied to curriculum learning, a training strategy inspired by learning of ‘easier to harder’ tasks. A higher model or human confidence score indicates a more recognisable and therefore easier sample, and can therefore be used as a scoring function to rank samples in curriculum learning. We evaluate our proposed methods with four state-of-the-art architectures for image and text classification tasks, using datasets with multi-rater label annotations by humans. We report that integrating model or human confidence information in label smoothing and curriculum learning improves both model performance and model calibration. The code is available at https://github.com/AoShuang92/Confidence Calibration CL. | [] | Train |
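The label-softening operation this abstract builds on can be sketched in a few lines. The function below is an illustrative assumption (the name, the NumPy usage, and the confidence-weighted variant are mine, not the paper's implementation): uniform label smoothing moves a fraction eps of the probability mass off the one-hot class, while a per-class weight vector makes the softening non-uniform.

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1, weights=None):
    """Soften a hard target: move a fraction eps of the probability
    mass off the one-hot class. weights=None gives uniform label
    smoothing; a per-class weight vector (e.g. model or human
    confidence scores) redistributes the mass non-uniformly instead."""
    k = one_hot.shape[-1]
    if weights is None:
        weights = np.full(k, 1.0 / k)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize to a distribution
    return (1.0 - eps) * one_hot + eps * weights

y = np.array([0.0, 1.0, 0.0])
print(smooth_labels(y, eps=0.1))  # roughly [0.033, 0.933, 0.033]
```

The smoothed vector still sums to 1, and a larger eps softens the target further; passing confidence-derived `weights` is one way to obtain the non-uniform softening the abstract argues for.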
42,869 | 16 | Title: Understanding Prompt Tuning for V-L Models Through the Lens of Neural Collapse
Abstract: Large-scale vision-language (V-L) models have demonstrated remarkable generalization capabilities for downstream tasks through prompt tuning. However, the mechanisms behind the learned text representations are unknown, limiting further generalization gains, especially under class imbalance scenarios. Recent advances in the neural collapse (NC) phenomenon of vision-only models suggest that the optimal representation structure is the simplex ETF, which paves the way to study representations in V-L models. In this paper, we make the first attempt to use NC for examining the representations in V-L models via prompt tuning. It is found that NC optimality of text-to-image representations shows a positive correlation with downstream generalizability, which is more severe under class imbalance settings. To improve the representations, we propose Neural-collapse-anchored Prompt Tuning (NPT), a novel method that learns prompts with text and image representations that satisfy the same simplex ETF. NPT incorporates two regularization terms: language-modality collapse and multi-modality isomorphism; and it is compatible with other prompt tuning methods. Extensive experiments show that NPT can consistently help to improve existing prompt tuning techniques across 11 datasets for both balanced and imbalanced settings. | [
27209,
11156,
6556,
42878
] | Train |
42,870 | 30 | Title: Uncertainty-Aware Unlikelihood Learning Improves Generative Aspect Sentiment Quad Prediction
Abstract: Recently, aspect sentiment quad prediction has received widespread attention in the field of aspect-based sentiment analysis. Existing studies extract quadruplets via pre-trained generative language models to paraphrase the original sentence into a templated target sequence. However, previous works only focus on what to generate but ignore what not to generate. We argue that considering the negative samples also leads to potential benefits. In this work, we propose a template-agnostic method to control the token-level generation, which boosts original learning and reduces mistakes simultaneously. Specifically, we introduce Monte Carlo dropout to understand the built-in uncertainty of pre-trained language models, acquiring the noises and errors. We further propose marginalized unlikelihood learning to suppress the uncertainty-aware mistake tokens. Finally, we introduce entropy minimization to balance the effects of marginalized unlikelihood learning. Extensive experiments on four public datasets demonstrate the effectiveness of our approach on various generation templates. | [
22525
] | Test |
42,871 | 24 | Title: Neuro-Symbolic Bi-Directional Translation - Deep Learning Explainability for Climate Tipping Point Research
Abstract: In recent years, there has been an increase in using deep learning for climate and weather modeling. Though results have been impressive, explainability and interpretability of deep learning models are still a challenge. A third wave of Artificial Intelligence (AI), which includes logic and reasoning, has been described as a way to address these issues. Neuro-symbolic AI is a key component of this integration of logic and reasoning with deep learning. In this work we propose a neuro-symbolic approach called Neuro-Symbolic Question-Answer Program Translator, or NS-QAPT, to address explainability and interpretability for deep learning climate simulation, applied to climate tipping point discovery. The NS-QAPT method includes a bidirectional encoder-decoder architecture that translates between domain-specific questions and executable programs used to direct the climate simulation, acting as a bridge between climate scientists and deep learning models. We show early compelling results of this translation method and introduce a domain-specific language and associated executable programs for a commonly known tipping point, the collapse of the Atlantic Meridional Overturning Circulation (AMOC). | [
17857,
34149
] | Train |
42,872 | 16 | Title: Inferring High-level Geographical Concepts via Knowledge Graph and Multi-scale Data Integration: A Case Study of C-shaped Building Pattern Recognition
Abstract: Effective building pattern recognition is critical for understanding urban form, automating map generalization, and visualizing 3D city models. Most existing studies use object-independent methods based on visual perception rules and proximity graph models to extract patterns. However, because human vision is a part-based system, pattern recognition may require decomposing shapes into parts or grouping them into clusters. Existing methods may not recognize all visually aware patterns, and the proximity graph model can be inefficient. To improve efficiency and effectiveness, we integrate multi-scale data using a knowledge graph, focusing on the recognition of C-shaped building patterns. First, we use a property graph to represent the relationships between buildings within and across different scales involved in C-shaped building pattern recognition. Next, we store this knowledge graph in a graph database and convert the rules for C-shaped pattern recognition and enrichment into query conditions. Finally, we recognize and enrich C-shaped building patterns using rule-based reasoning in the built knowledge graph. We verify the effectiveness of our method using multi-scale data with three levels of detail (LODs) collected from the Gaode Map. Our results show that our method achieves a higher recall rate of 26.4% for LOD1, 20.0% for LOD2, and 9.1% for LOD3 compared to existing approaches. We also achieve recognition efficiency improvements of 0.91, 1.37, and 9.35 times, respectively. | [] | Train |
42,873 | 4 | Title: Secure compilation of rich smart contracts on poor UTXO blockchains
Abstract: Most blockchain platforms from Ethereum onwards render smart contracts as stateful reactive objects that update their state and transfer crypto-assets in response to transactions. In this way, they support the development of contracts in the imperative procedural paradigm, familiar to most programmers. A drawback of this design choice is that when a user submits a transaction, they cannot predict in which state it will be executed, exposing them to transaction-ordering attacks. The UTXO model is an alternative blockchain design that thwarts these attacks by requiring new transactions to spend past ones: since transactions have unique identifiers, reordering attacks are ineffective. Currently, the blockchains following the UTXO model either provide contracts with limited expressiveness (Bitcoin), or require complex run-time environments and unfamiliar programming abstractions (Cardano). We present a framework for smart contracts in the UTXO model, that allows expressive contracts to be securely executed by bare-bone UTXO blockchains with loop-free scripts enriched with covenants, and supports the familiar procedural programming style. | [] | Train |
42,874 | 16 | Title: Not All Steps are Created Equal: Selective Diffusion Distillation for Image Manipulation
Abstract: Conditional diffusion models have demonstrated impressive performance in image manipulation tasks. The general pipeline involves adding noise to the image and then denoising it. However, this method faces a trade-off problem: adding too much noise affects the fidelity of the image while adding too little affects its editability. This largely limits their practical applicability. In this paper, we propose a novel framework, Selective Diffusion Distillation (SDD), that ensures both the fidelity and editability of images. Instead of directly editing images with a diffusion model, we train a feedforward image manipulation network under the guidance of the diffusion model. Besides, we propose an effective indicator to select the semantic-related timestep to obtain the correct semantic guidance from the diffusion model. This approach successfully avoids the dilemma caused by the diffusion process. Our extensive experiments demonstrate the advantages of our framework. Code is released at https://github.com/AndysonYs/Selective-Diffusion-Distillation. | [] | Train |
42,875 | 16 | Title: Diffusion Probabilistic Models for Scene-Scale 3D Categorical Data
Abstract: In this paper, we learn a diffusion model to generate 3D data on a scene-scale. Specifically, our model crafts a 3D scene consisting of multiple objects, while recent diffusion research has focused on a single object. To realize our goal, we represent a scene with discrete class labels, i.e., categorical distribution, to assign multiple objects into semantic categories. Thus, we extend discrete diffusion models to learn scene-scale categorical distributions. In addition, we validate that a latent diffusion model can reduce computation costs for training and deploying. To the best of our knowledge, our work is the first to apply discrete and latent diffusion for 3D categorical data on a scene-scale. We further propose to perform semantic scene completion (SSC) by learning a conditional distribution using our diffusion model, where the condition is a partial observation in a sparse point cloud. In experiments, we empirically show that our diffusion models not only generate reasonable scenes, but also perform the scene completion task better than a discriminative model. Our code and models are available at https://github.com/zoomin-lee/scene-scale-diffusion | [
8308
] | Train |
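The discrete (categorical) diffusion idea in this abstract can be illustrated with the simplest uniform transition kernel. This is a generic sketch, not the paper's parameterization (the function name and the K = 4 example are assumptions):

```python
import numpy as np

def uniform_forward_step(probs, beta):
    """One forward-diffusion step for categorical data with a uniform
    kernel: keep the current class with probability (1 - beta),
    otherwise resample uniformly over the K classes. `probs` holds
    per-element categorical distributions with K in the last axis."""
    k = probs.shape[-1]
    return (1.0 - beta) * probs + beta / k

x0 = np.array([0.0, 0.0, 1.0, 0.0])  # a one-hot semantic label, K = 4
x1 = uniform_forward_step(x0, beta=0.2)
print(x1)  # mass 0.85 stays on class 2, 0.05 on each other class
```

Iterating this step drives any one-hot label toward the uniform distribution; a generative model is then trained to reverse the steps, and conditioning the reverse process on a partial observation corresponds to the scene-completion use the abstract describes.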
42,876 | 24 | Title: Identifying acute illness phenotypes via deep temporal interpolation and clustering network on physiologic signatures
Abstract: Initial hours of hospital admission impact clinical trajectory, but early clinical decisions often suffer due to data paucity. With clustering analysis for vital signs within six hours of admission, patient phenotypes with distinct pathophysiological signatures and outcomes may support early clinical decisions. We created a single-center, longitudinal EHR dataset for 75,762 adults admitted to a tertiary care center for 6+ hours. We proposed a deep temporal interpolation and clustering network to extract latent representations from sparse, irregularly sampled vital sign data and derived distinct patient phenotypes in a training cohort (n=41,502). Model and hyper-parameters were chosen based on a validation cohort (n=17,415). Test cohort (n=16,845) was used to analyze reproducibility and correlation with biomarkers. The training, validation, and testing cohorts had similar distributions of age (54-55 yrs), sex (55% female), race, comorbidities, and illness severity. Four clusters were identified. Phenotype A (18%) had most comorbid disease with higher rate of prolonged respiratory insufficiency, acute kidney injury, sepsis, and three-year mortality. Phenotypes B (33%) and C (31%) had diffuse patterns of mild organ dysfunction. Phenotype B had favorable short-term outcomes but second-highest three-year mortality. Phenotype C had favorable clinical outcomes. Phenotype D (17%) had early/persistent hypotension, high rate of early surgery, and substantial biomarker rate of inflammation but second-lowest three-year mortality. After comparing phenotypes' SOFA scores, clustering results did not simply repeat other acuity assessments. In a heterogeneous cohort, four phenotypes with distinct categories of disease and outcomes were identified by a deep temporal interpolation and clustering network. This tool may impact triage decisions and clinical decision-support under time constraints. | [] | Train |
42,877 | 10 | Title: The Two Faces of AI in Green Mobile Computing: A Literature Review
Abstract: Artificial intelligence is bringing ever new functionalities to the realm of mobile devices that are now considered essential (e.g., camera and voice assistants, recommender systems). Yet, operating artificial intelligence takes up a substantial amount of energy. However, artificial intelligence is also being used to enable more energy-efficient solutions for mobile systems. Hence, artificial intelligence has two faces in that regard: it is both a key enabler of desired (efficient) mobile functionalities and a major power draw on these devices, playing a part in both the solution and the problem. In this paper, we present a review of the literature of the past decade on the usage of artificial intelligence within the realm of green mobile computing. From the analysis of 34 papers, we highlight the emerging patterns and map the field into 13 main topics that are summarized in detail. Our results showcase that the field has been growing slowly in recent years, more specifically, since 2019. Regarding the double impact AI has on mobile energy consumption, the energy consumption of AI-based mobile systems is under-studied in comparison to the usage of AI for energy-efficient mobile computing, and we argue for more exploratory studies in that direction. We observe that although most studies are framed as solution papers (94%), the large majority do not make those solutions publicly available to the community. Moreover, we also show that most contributions are purely academic (28 out of 34 papers) and that we need to promote the involvement of the mobile software industry in this field. | [
43424
] | Train |