| node_id (int64, 0–76.9k) | label (int64, 0–39) | text (string, 13–124k chars) | neighbors (list, 0–3.32k items) | mask (4 classes) |
|---|---|---|---|---|
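For readers consuming rows with this schema, a minimal sketch of splitting nodes by `mask` and flattening the per-node `neighbors` lists into an edge list. The two sample rows below are illustrative stand-ins, not actual dataset entries.

```python
# Minimal sketch of consuming rows with the schema above:
# node_id, label, text, neighbors (list of node_ids), mask (split name).
# The sample rows are illustrative stand-ins, not actual dataset entries.
rows = [
    {"node_id": 43778, "label": 26, "text": "Title: ...", "neighbors": [], "mask": "Train"},
    {"node_id": 43781, "label": 16, "text": "Title: ...", "neighbors": [14507, 39339], "mask": "Test"},
]

# Split nodes by mask for transductive node classification.
splits = {}
for r in rows:
    splits.setdefault(r["mask"], []).append(r["node_id"])

# Flatten the per-node neighbor lists into (source, target) edges.
edges = [(r["node_id"], n) for r in rows for n in r["neighbors"]]
```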
43,778 | 26 | Title: Asymmetry of social interactions and its role in link predictability: The case of coauthorship networks
Abstract: nan | [] | Train |
43,779 | 24 | Title: The Implicit Bias of Minima Stability in Multivariate Shallow ReLU Networks
Abstract: We study the type of solutions to which stochastic gradient descent converges when used to train a single hidden-layer multivariate ReLU network with the quadratic loss. Our results are based on a dynamical stability analysis. In the univariate case, it was shown that linearly stable minima correspond to network functions (predictors), whose second derivative has a bounded weighted $L^1$ norm. Notably, the bound gets smaller as the step size increases, implying that training with a large step size leads to `smoother' predictors. Here we generalize this result to the multivariate case, showing that a similar result applies to the Laplacian of the predictor. We demonstrate the tightness of our bound on the MNIST dataset, and show that it accurately captures the behavior of the solutions as a function of the step size. Additionally, we prove a depth separation result on the approximation power of ReLU networks corresponding to stable minima of the loss. Specifically, although shallow ReLU networks are universal approximators, we prove that stable shallow networks are not. Namely, there is a function that cannot be well-approximated by stable single hidden-layer ReLU networks trained with a non-vanishing step size. This is while the same function can be realized as a stable two hidden-layer ReLU network. Finally, we prove that if a function is sufficiently smooth (in a Sobolev sense) then it can be approximated arbitrarily well using shallow ReLU networks that correspond to stable solutions of gradient descent. | [] | Train |
43,780 | 22 | Title: Proceedings 14th Workshop on Programming Language Approaches to Concurrency and Communication-cEntric Software, PLACES@ETAPS 2023, Paris, France, 22 April 2023
Abstract: This volume contains the proceedings of PLACES 2023, the 14th edition of the Workshop on Programming Language Approaches to Concurrency and Communication-cEntric Software. The PLACES workshop series offers a forum for researchers from different fields to exchange new ideas about the challenges of modern and future programming, where concurrency and distribution are the norm rather than a marginal concern. PLACES 2023 was held on 22 April 2023 in Paris, France. The programme included keynote talks by Marieke Huisman and Vasco T. Vasconcelos, presentations of five research papers, and six talks about preliminary or already-published work that could foster interesting discussion during the workshop. These proceedings contain the five accepted research papers, the abstracts of the keynote talks, and a list of the other contributions. | [] | Train |
43,781 | 16 | Title: ExpCLIP: Bridging Text and Facial Expressions via Semantic Alignment
Abstract: The objective of stylized speech-driven facial animation is to create animations that encapsulate specific emotional expressions. Existing methods often depend on pre-established emotional labels or facial expression templates, which may limit the necessary flexibility for accurately conveying user intent. In this research, we introduce a technique that enables the control of arbitrary styles by leveraging natural language as emotion prompts. This technique presents benefits in terms of both flexibility and user-friendliness. To realize this objective, we initially construct a Text-Expression Alignment Dataset (TEAD), wherein each facial expression is paired with several prompt-like descriptions. We propose an innovative automatic annotation method, supported by Large Language Models (LLMs), to expedite the dataset construction, thereby eliminating the substantial expense of manual annotation. Following this, we utilize TEAD to train a CLIP-based model, termed ExpCLIP, which encodes text and facial expressions into semantically aligned style embeddings. The embeddings are subsequently integrated into the facial animation generator to yield expressive and controllable facial animations. Given the limited diversity of facial emotions in existing speech-driven facial animation training data, we further introduce an effective Expression Prompt Augmentation (EPA) mechanism to enable the animation generator to support unprecedented richness in style control. Comprehensive experiments illustrate that our method accomplishes expressive facial animation generation and offers enhanced flexibility in effectively conveying the desired style. | [
14507,
39339,
23638,
25287
] | Train |
43,782 | 10 | Title: Neural Polarizer: A Lightweight and Effective Backdoor Defense via Purifying Poisoned Features
Abstract: Recent studies have demonstrated the susceptibility of deep neural networks to backdoor attacks. Given a backdoored model, its prediction of a poisoned sample with trigger will be dominated by the trigger information, though trigger information and benign information coexist. Inspired by the mechanism of the optical polarizer that a polarizer could pass light waves with particular polarizations while filtering light waves with other polarizations, we propose a novel backdoor defense method by inserting a learnable neural polarizer into the backdoored model as an intermediate layer, in order to purify the poisoned sample via filtering trigger information while maintaining benign information. The neural polarizer is instantiated as one lightweight linear transformation layer, which is learned through solving a well designed bi-level optimization problem, based on a limited clean dataset. Compared to other fine-tuning-based defense methods which often adjust all parameters of the backdoored model, the proposed method only needs to learn one additional layer, such that it is more efficient and requires less clean data. Extensive experiments demonstrate the effectiveness and efficiency of our method in removing backdoors across various neural network architectures and datasets, especially in the case of very limited clean data. | [
19680,
6222,
4903
] | Test |
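The abstract's core architectural move, inserting one learnable linear layer into an otherwise frozen network, can be sketched as follows. The layer shapes, the `tanh` activation, and the identity initialisation are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # feature dimension (illustrative)

# Stand-ins for the frozen backdoored model: a feature extractor and a head.
W_feat = rng.standard_normal((d, d))
W_head = rng.standard_normal((3, d))

# The "neural polarizer": a single linear layer inserted between them.
# Starting from the identity leaves the model's behaviour unchanged; only
# this matrix would be updated (on a small clean dataset) in the paper's
# bi-level optimisation, while W_feat and W_head stay frozen.
polarizer = np.eye(d)

def forward(x):
    h = np.tanh(W_feat @ x)  # frozen intermediate features
    h = polarizer @ h        # learnable purification layer
    return W_head @ h        # frozen classifier logits

logits = forward(rng.standard_normal(d))
```

Because only the `d × d` polarizer is trainable, the number of learned parameters is tiny compared with fine-tuning the whole model, which is the efficiency argument the abstract makes.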
43,783 | 7 | Title: Sequential Deep Operator Networks (S-DeepONet) for Predicting Full-field Solutions Under Time-dependent Loads
Abstract: Deep Operator Network (DeepONet), a recently introduced deep learning operator network, approximates linear and nonlinear solution operators by taking parametric functions (infinite-dimensional objects) as inputs and mapping them to solution functions in contrast to classical neural networks that need re-training for every new set of parametric inputs. In this work, we have extended the classical formulation of DeepONets by introducing sequential learning models like the gated recurrent unit (GRU) and long short-term memory (LSTM) in the branch network to allow for accurate predictions of the solution contour plots under parametric and time-dependent loading histories. Two example problems, one on transient heat transfer and the other on path-dependent plastic loading, were shown to demonstrate the capabilities of the new architectures compared to the benchmark DeepONet model with a feed-forward neural network (FNN) in the branch. Despite being more computationally expensive, the GRU- and LSTM-DeepONets lowered the prediction error by half (0.06\% vs. 0.12\%) compared to FNN-DeepONet in the heat transfer problem, and by 2.5 times (0.85\% vs. 3\%) in the plasticity problem. In all cases, the proposed DeepONets achieved a prediction $R^2$ value of above 0.995, indicating superior accuracy. Results show that once trained, the proposed DeepONets can accurately predict the final full-field solution over the entire domain and are at least two orders of magnitude faster than direct finite element simulations, rendering them accurate and robust surrogate models for rapid preliminary evaluations. | [
13424,
2780,
29824
] | Validation |
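As background for the branch/trunk terminology in this abstract, a toy DeepONet forward pass: the branch encodes the sampled input function into coefficients, the trunk encodes query coordinates into basis values, and the prediction is their inner product. The linear stand-in networks and all dimensions below are illustrative; the paper's contribution is replacing the branch with a GRU/LSTM over the loading history.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 16   # shared latent width of branch and trunk (illustrative)
m = 100  # number of samples of the input (load) function

W_branch = rng.standard_normal((p, m)) / np.sqrt(m)  # stand-in branch net
W_trunk = rng.standard_normal((p, 2))                # stand-in trunk net

def deeponet(load_samples, coords):
    b = W_branch @ load_samples      # (p,)   coefficients from the input function
    t = np.tanh(coords @ W_trunk.T)  # (n, p) basis values at the query points
    return t @ b                     # (n,)   predicted field values u(x, t)

load = np.sin(np.linspace(0.0, 3.0, m))  # a sampled time-dependent load history
coords = np.stack([np.linspace(0.0, 1.0, 50), np.full(50, 0.5)], axis=1)
u = deeponet(load, coords)               # full-field prediction along one slice
```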
43,784 | 24 | Title: Generalised f-Mean Aggregation for Graph Neural Networks
Abstract: Graph Neural Network (GNN) architectures are defined by their implementations of update and aggregation modules. While many works focus on new ways to parametrise the update modules, the aggregation modules receive comparatively little attention. Because it is difficult to parametrise aggregation functions, currently most methods select a "standard aggregator" such as $\mathrm{mean}$, $\mathrm{sum}$, or $\mathrm{max}$. While this selection is often made without any reasoning, it has been shown that the choice of aggregator has a significant impact on performance, and the best choice of aggregator is problem-dependent. Since aggregation is a lossy operation, it is crucial to select the most appropriate aggregator in order to minimise information loss. In this paper, we present GenAgg, a generalised aggregation operator, which parametrises a function space that includes all standard aggregators. In our experiments, we show that GenAgg is able to represent the standard aggregators with much higher accuracy than baseline methods. We also show that using GenAgg as a drop-in replacement for an existing aggregator in a GNN often leads to a significant boost in performance across various tasks. | [] | Train |
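For context on the name: the classical generalised f-mean $g(X) = f^{-1}\big(\frac{1}{n}\sum_i f(x_i)\big)$ recovers the standard aggregators for particular choices of $f$ (identity gives the mean, $\log$ gives the geometric mean, and large power exponents approach $\max$). The sketch below illustrates only this classical family, not GenAgg's learnable parametrisation.

```python
import numpy as np

def f_mean(x, f, f_inv):
    """Generalised f-mean: f_inv(mean(f(x)))."""
    x = np.asarray(x, dtype=float)
    return f_inv(np.mean(f(x)))

x = [1.0, 2.0, 4.0]
arithmetic = f_mean(x, lambda v: v, lambda v: v)          # plain mean
geometric = f_mean(x, np.log, np.exp)                     # f = log -> geometric mean
power8 = f_mean(x, lambda v: v**8, lambda v: v**(1 / 8))  # large exponent -> near max(x)
```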
43,785 | 4 | Title: Bridging the Bubbles: Connecting Academia and Industry in Cybersecurity Research
Abstract: There is a perceived disconnect between how ad hoc industry solutions and academic research solutions in cyber security are developed and applied. Is there a difference in philosophy in how solutions to cyber security problems are developed by industry and by academia? What could academia and industry do to bridge this gap and speed up the development and use of effective cybersecurity solutions? This paper provides an overview of the most critical gaps and solutions identified by an interdisciplinary expert exchange on the topic. The discussion was held in the form of the webinar "Bridging the Bubbles: Connecting Academia and Industry in Cybersecurity Research" in November 2022 as part of the Rogers Cybersecure Catalyst webinar series. Panelists included researchers from academia and industry as well as experts from industry and business development. The key findings and recommendations of this exchange are supported by the relevant scientific literature on the topic within this paper. Different approaches and time frames in development and lifecycle management, challenges in knowledge transfer and communication as well as heterogeneous metrics for success in projects are examples of the evaluated subject areas. | [
8386
] | Train |
43,786 | 16 | Title: ViTO: Vision Transformer-Operator
Abstract: We combine vision transformers with operator learning to solve diverse inverse problems described by partial differential equations (PDEs). Our approach, named ViTO, combines a U-Net based architecture with a vision transformer. We apply ViTO to solve inverse PDE problems of increasing complexity, namely for the wave equation, the Navier-Stokes equations and the Darcy equation. We focus on the more challenging case of super-resolution, where the input dataset for the inverse problem is at a significantly coarser resolution than the output. The results we obtain are comparable to or exceed the leading operator network benchmarks in terms of accuracy. Furthermore, ViTO's architecture has a small number of trainable parameters (less than 10% of the leading competitor), resulting in a performance speed-up of over 5x when averaged over the various test cases. | [
2075,
25821,
37318,
36735
] | Train |
43,787 | 33 | Title: On the Degree of Extension of Some Models Defining Non-Regular Languages
Abstract: This work is a survey of the main results reported for the degree of extension of two models defining non-regular languages, namely the context-free grammar and the extended automaton over groups. More precisely, we recall the main results regarding the degree on non-regularity of a context-free grammar as well as the degree of extension of finite automata over groups. Finally, we consider a similar measure for the finite automata with translucent letters and present some preliminary results. This measure could be considered for many mechanisms that extend a less expressive one. | [] | Validation |
43,788 | 31 | Title: HyHTM: Hyperbolic Geometry based Hierarchical Topic Models
Abstract: Hierarchical Topic Models (HTMs) are useful for discovering topic hierarchies in a collection of documents. However, traditional HTMs often produce hierarchies where lower-level topics are unrelated and not specific enough to their higher-level topics. Additionally, these methods can be computationally expensive. We present HyHTM - a Hyperbolic geometry based Hierarchical Topic Model - that addresses these limitations by incorporating hierarchical information from hyperbolic geometry to explicitly model hierarchies in topic models. Experimental results with four baselines show that HyHTM can better attend to parent-child relationships among topics. HyHTM produces coherent topic hierarchies that specialise in granularity from generic higher-level topics to specific lower-level topics. Further, our model is significantly faster and leaves a much smaller memory footprint than our best-performing baseline. We have made the source code for our algorithm publicly accessible. | [] | Test |
43,789 | 5 | Title: Parallel two-stage reduction to Hessenberg-triangular form
Abstract: We present a two-stage algorithm for the parallel reduction of a pencil to Hessenberg-triangular form. Traditionally, two-stage Hessenberg-triangular reduction algorithms achieve high performance in the first stage, but struggle to achieve high performance in the second stage. Our algorithm extends techniques described by Karlsson et al. [9] to also achieve high performance in the second stage. Experiments in a shared memory environment demonstrate that the algorithm can outperform state-of-the-art implementations. | [] | Validation |
43,790 | 30 | Title: The StatCan Dialogue Dataset: Retrieving Data Tables through Conversations with Genuine Intents
Abstract: We introduce the StatCan Dialogue Dataset consisting of 19,379 conversation turns between agents working at Statistics Canada and online users looking for published data tables. The conversations stem from genuine intents, are held in English or French, and lead to agents retrieving one of over 5000 complex data tables. Based on this dataset, we propose two tasks: (1) automatic retrieval of relevant tables based on an on-going conversation, and (2) automatic generation of appropriate agent responses at each turn. We investigate the difficulty of each task by establishing strong baselines. Our experiments on a temporal data split reveal that all models struggle to generalize to future conversations, as we observe a significant drop in performance across both tasks when we move from the validation to the test set. In addition, we find that response generation models struggle to decide when to return a table. Considering that the tasks pose significant challenges to existing models, we encourage the community to develop models for our task, which can be directly used to help knowledge workers find relevant tables for live chat users. | [] | Train |
43,791 | 24 | Title: Effective Robustness against Natural Distribution Shifts for Models with Different Training Data
Abstract: ``Effective robustness'' measures the extra out-of-distribution (OOD) robustness beyond what can be predicted from the in-distribution (ID) performance. Existing effective robustness evaluations typically use a single test set such as ImageNet to evaluate ID accuracy. This becomes problematic when evaluating models trained on different data distributions, e.g., comparing models trained on ImageNet vs. zero-shot language-image pre-trained models trained on LAION. In this paper, we propose a new effective robustness evaluation metric to compare the effective robustness of models trained on different data distributions. To do this we control for the accuracy on multiple ID test sets that cover the training distributions for all the evaluated models. Our new evaluation metric provides a better estimate of the effective robustness and explains the surprising effective robustness gains of zero-shot CLIP-like models exhibited when considering only one ID dataset, while the gains diminish under our evaluation. | [
13144
] | Train |
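A rough sketch of the kind of evaluation described: fit the baseline trend of OOD accuracy against accuracy on multiple ID test sets, and score a model by its residual above that trend. The accuracy values and the plain least-squares fit on raw accuracies are illustrative; the paper's exact transformation of accuracies may differ.

```python
import numpy as np

# Hypothetical accuracies for a pool of baseline models:
# each row = accuracy on two ID test sets; y = accuracy on the OOD test set.
id_acc = np.array([[0.70, 0.65], [0.75, 0.72], [0.80, 0.78], [0.85, 0.81]])
ood_acc = np.array([0.45, 0.52, 0.60, 0.66])

# Fit the baseline trend OOD ~ ID over both ID test sets (with intercept).
X = np.hstack([id_acc, np.ones((len(id_acc), 1))])
coef, *_ = np.linalg.lstsq(X, ood_acc, rcond=None)

def effective_robustness(model_id_acc, model_ood_acc):
    """OOD accuracy beyond what the multi-ID baseline trend predicts."""
    predicted = np.append(model_id_acc, 1.0) @ coef
    return model_ood_acc - predicted

er = effective_robustness(np.array([0.78, 0.74]), 0.62)
```

Controlling for several ID test sets, rather than ImageNet accuracy alone, is what lets the trend cover models trained on different distributions (e.g. ImageNet vs. LAION).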
43,792 | 27 | Title: Learning and Adapting Agile Locomotion Skills by Transferring Experience
Abstract: Legged robots have enormous potential in their range of capabilities, from navigating unstructured terrains to high-speed running. However, designing robust controllers for highly agile dynamic motions remains a substantial challenge for roboticists. Reinforcement learning (RL) offers a promising data-driven approach for automatically training such controllers. However, exploration in these high-dimensional, underactuated systems remains a significant hurdle for enabling legged robots to learn performant, naturalistic, and versatile agility skills. We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks. To leverage controllers we can acquire in practice, we design this framework to be flexible in terms of their source -- that is, the controllers may have been optimized for a different objective under different dynamics, or may require different knowledge of the surroundings -- and thus may be highly suboptimal for the target task. We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments. We also demonstrate that the agile behaviors learned in this way are graceful and safe enough to deploy in the real world. | [
710,
8231,
9673,
7535,
10943,
34398,
6559
] | Validation |
43,793 | 26 | Title: Exploring Downvoting in Blockchain-based Online Social Media Platforms
Abstract: In recent years, Blockchain-based Online Social Media (BOSM) platforms have evolved fast due to the advancement of blockchain technology. BOSM can effectively overcome the problems of traditional social media platforms, such as a single point of trust and insufficient incentives for users, by combining a decentralized governance structure and a cryptocurrency-based incentive model, thereby attracting a large number of users and making it a crucial component of Web3. BOSM allows users to downvote low-quality content and aims to decrease the visibility of low-quality content by sorting and filtering it through downvoting. However, this feature may be maliciously exploited by some users to undermine the fairness of the incentive, reduce the quality of highly visible content, and further reduce users’ enthusiasm for content creation and the attractiveness of the platform. In this paper, we study and analyze the downvoting behavior using four years of data collected from Steemit, the largest BOSM platform. We discovered that a significant number of bot accounts were actively downvoting content. In addition, we discovered that roughly 9% of the downvoting activity might be retaliatory. We did not detect any significant instances of downvoting on content for a specific topic. We believe that the findings in this paper will facilitate the future development of user behavior analysis and incentive pattern design in BOSM and Web3. | [] | Train |
43,794 | 8 | Title: Lightweight privacy-preserving truth discovery for vehicular air quality monitoring
Abstract: nan | [] | Train |
43,795 | 16 | Title: SurgicalSAM: Efficient Class Promptable Surgical Instrument Segmentation
Abstract: The Segment Anything Model (SAM) is a powerful foundation model that has revolutionised image segmentation. To apply SAM to surgical instrument segmentation, a common approach is to locate precise points or boxes of instruments and then use them as prompts for SAM in a zero-shot manner. However, we observe two problems with this naive pipeline: (1) the domain gap between natural objects and surgical instruments leads to poor generalisation of SAM; and (2) SAM relies on precise point or box locations for accurate segmentation, requiring either extensive manual guidance or a well-performing specialist detector for prompt preparation, which leads to a complex multi-stage pipeline. To address these problems, we introduce SurgicalSAM, a novel end-to-end efficient-tuning approach for SAM to effectively integrate surgical-specific information with SAM's pre-trained knowledge for improved generalisation. Specifically, we propose a lightweight prototype-based class prompt encoder for tuning, which directly generates prompt embeddings from class prototypes and eliminates the use of explicit prompts for improved robustness and a simpler pipeline. In addition, to address the low inter-class variance among surgical instrument categories, we propose contrastive prototype learning, further enhancing the discrimination of the class prototypes for more accurate class prompting. The results of extensive experiments on both EndoVis2018 and EndoVis2017 datasets demonstrate that SurgicalSAM achieves state-of-the-art performance while only requiring a small number of tunable parameters. The source code will be released at https://github.com/wenxi-yue/SurgicalSAM. | [
19907,
5864,
44854,
14168,
5497,
43102
] | Test |
43,796 | 25 | Title: Unified Modeling of Multi-Talker Overlapped Speech Recognition and Diarization with a Sidecar Separator
Abstract: Multi-talker overlapped speech poses a significant challenge for speech recognition and diarization. Recent research indicated that these two tasks are inter-dependent and complementary, motivating us to explore a unified modeling method to address them in the context of overlapped speech. A recent study proposed a cost-effective method to convert a single-talker automatic speech recognition (ASR) system into a multi-talker one, by inserting a Sidecar separator into the frozen well-trained ASR model. Extending on this, we incorporate a diarization branch into the Sidecar, allowing for unified modeling of both ASR and diarization with a negligible overhead of only 768 parameters. The proposed method yields better ASR results compared to the baseline on LibriMix and LibriSpeechMix datasets. Moreover, without sophisticated customization on the diarization task, our method achieves acceptable diarization results on the two-speaker subset of CALLHOME with only a few adaptation steps. | [
14608
] | Train |
43,797 | 3 | Title: Ethical Considerations Towards Protestware
Abstract: A key drawback to using an Open Source third-party library is the risk of introducing malicious attacks. In recent times, these threats have taken a new form, when maintainers turn their Open Source libraries into protestware. This is defined as software containing political messages delivered through these libraries, which can either be malicious or benign. Since developers are willing to freely open-up their software to these libraries, much trust and responsibility are placed on the maintainers to ensure that the library does what it promises to do. This paper takes a look into the possible scenarios where developers might consider turning their Open Source Software into protestware, using an ethico-philosophical lens. Using different frameworks commonly used in AI ethics, we explore the different dilemmas that may result in protestware. Additionally, we illustrate how an open-source maintainer's decision to protest is influenced by different stakeholders (viz., their membership in the OSS community, their personal views, financial motivations, social status, and moral viewpoints), making protestware a multifaceted and intricate matter. | [] | Train |
43,798 | 28 | Title: Mixing a Covert and a Non-Covert User
Abstract: This paper establishes the fundamental limits of a two-user single-receiver system where communication from User 1 (but not from User 2) needs to be undetectable to an external warden. Our fundamental limits show a tradeoff between the highest rates (or square-root rates) that are simultaneously achievable for the two users. Moreover, coded time-sharing for both users is fundamentally required on most channels, which distinguishes this setup from the more classical setups with either only covert users or only non-covert users. Interestingly, the presence of a non-covert user can be beneficial for improving the covert capacity of the other user. | [] | Validation |
43,799 | 16 | Title: MoStGAN-V: Video Generation with Temporal Motion Styles
Abstract: Video generation remains a challenging task due to spatiotemporal complexity and the requirement of synthesizing diverse motions with temporal consistency. Previous works attempt to generate videos in arbitrary lengths either in an autoregressive manner or regarding time as a continuous signal. However, they struggle to synthesize detailed and diverse motions with temporal coherence and tend to generate repetitive scenes after a few time steps. In this work, we argue that a single time-agnostic latent vector of style-based generator is insufficient to model various and temporally-consistent motions. Hence, we introduce additional time-dependent motion styles to model diverse motion patterns. In addition, a Motion Style Attention modulation mechanism, dubbed as MoStAtt, is proposed to augment frames with vivid dynamics for each specific scale (i.e., layer), which assigns attention score for each motion style w.r.t. deconvolution filter weights in the target synthesis layer and softly attends different motion styles for weight modulation. Experimental results show our model achieves state-of-the-art performance on four unconditional $256^2$ video synthesis benchmarks trained with only 3 frames per clip and produces better qualitative results with respect to dynamic motions. Code and videos have been made available at https://github.com/xiaoqian-shen/MoStGAN-V | [
38696
] | Train |
43,800 | 16 | Title: LSDM: Long-Short Diffeomorphic Motion for Weakly-Supervised Ultrasound Landmark Tracking
Abstract: Accurate tracking of an anatomical landmark over time has been of high interest for disease assessment such as minimally invasive surgery and tumor radiation therapy. Ultrasound imaging is a promising modality benefiting from low-cost and real-time acquisition. However, generating a precise landmark tracklet is very challenging, as attempts can be easily distorted by different interference such as landmark deformation, visual ambiguity and partial observation. In this paper, we propose a long-short diffeomorphic motion network, which is a multi-task framework with a learnable deformation prior to search for the plausible deformation of the landmark. Specifically, we design a novel diffeomorphism representation in both long and short temporal domains for delineating motion margins and reducing long-term cumulative tracking errors. To further mitigate local anatomical ambiguity, we propose an expectation maximisation motion alignment module to iteratively optimize both long and short deformation, aligning to the same directional and spatial representation. The proposed multi-task system can be trained in a weakly-supervised manner, which only requires few landmark annotations for tracking and zero annotation for long-short deformation learning. We conduct extensive experiments on two ultrasound landmark tracking datasets. Experimental results show that our proposed method can achieve better or competitive landmark tracking performance compared with other state-of-the-art tracking methods, with a strong generalization capability across different scanner types and different ultrasound modalities. | [] | Test |
43,801 | 16 | Title: Semantic Embedded Deep Neural Network: A Generic Approach to Boost Multi-Label Image Classification Performance
Abstract: Fine-grained multi-label classification models have broad applications in e-commerce, such as visual based label predictions ranging from fashion attribute detection to brand recognition. One challenge to achieving satisfactory performance for those classification tasks in the real world is the wild visual background signal containing irrelevant pixels, which confuses the model, preventing it from focusing on the region of interest and making predictions upon that specific region. In this paper, we introduce a generic semantic-embedding deep neural network that applies spatial-awareness semantic features, incorporating a channel-wise attention based model to leverage the localization guidance and boost model performance for multi-label prediction. We observed an average relative improvement of 15.27% in terms of AUC score across all labels compared to the baseline approach. Core experiments and ablation studies involve multi-label fashion attribute classification performed on Instagram fashion apparel images. We compared the model performances among our approach, the baseline approach, and 3 alternative approaches to leverage semantic features. Results show favorable performance for our approach. | [
2881,
33697,
44615,
14349,
33039,
13682
] | Train |
43,802 | 6 | Title: Balancing the Digital and the Physical: Discussing Push and Pull Factors for Digital Well-being
Abstract: This position paper discusses the negative effects of excessive smartphone usage on mental health and well-being. Despite efforts to limit smartphone usage, users become desensitized to reminders and limitations. The paper proposes the use of Push&Pull Factors to contextualize intervention strategies promoting digital well-being. Further, alternative metrics to measure the effectiveness of intervention strategies will be discussed. | [] | Train |
43,803 | 34 | Title: 3-Coloring C4 or C3-Free Diameter Two Graphs
Abstract: nan | [] | Train |
43,804 | 24 | Title: Rethinking Gauss-Newton for learning over-parameterized models
Abstract: This work studies the global convergence and generalization properties of the Gauss-Newton (GN) method when optimizing one-hidden-layer networks in the over-parameterized regime. We first establish a global convergence result for GN in the continuous-time limit exhibiting a faster convergence rate compared to GD due to improved conditioning. We then perform an empirical study on a synthetic regression task to investigate the implicit bias of the GN method. We find that, while GN is consistently faster than GD in finding a global optimum, the performance of the learned model on a test dataset is heavily influenced by both the learning rate and the variance of the randomly initialized network's weights. Specifically, we find that initializing with a smaller variance results in better generalization, a behavior also observed for GD. However, in contrast to GD where larger learning rates lead to the best generalization, we find that GN achieves improved generalization when using smaller learning rates, albeit at the cost of slower convergence. This study emphasizes the significance of the learning rate in balancing the optimization speed of GN with the generalization ability of the learned solution. | [
25992
] | Validation |
43,805 | 16 | Title: TSTTC: A Large-Scale Dataset for Time-to-Contact Estimation in Driving Scenarios
Abstract: Time-to-Contact (TTC) estimation is a critical task for assessing collision risk and is widely used in various driver assistance and autonomous driving systems. The past few decades have witnessed the development of related theories and algorithms. The prevalent learning-based methods call for a large-scale TTC dataset in real-world scenarios. In this work, we present a large-scale object-oriented TTC dataset in the driving scene for promoting TTC estimation by a monocular camera. To collect valuable samples and make data with different TTC values relatively balanced, we go through thousands of hours of driving data and select over 200K sequences with a preset data distribution. To augment the quantity of small TTC cases, we also generate clips using the latest neural rendering methods. Additionally, we provide several simple yet effective TTC estimation baselines and evaluate them extensively on the proposed dataset to demonstrate their effectiveness. The proposed dataset is publicly available at https://open-dataset.tusen.ai/TSTTC. | [
26484
] | Validation |
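The row above does not spell out its baselines; as a hedged illustration of the classical geometry behind monocular TTC (the function name and setup here are illustrative, not taken from the TSTTC paper), under a pinhole camera and constant closing speed, the growth ratio s of an object's apparent width between two frames gives TTC = dt / (s - 1):

```python
def ttc_from_scale(w_prev, w_curr, dt):
    """Estimate time-to-contact (seconds) at the current frame from the
    growth of an object's apparent width between two frames taken dt apart.

    Assumes a pinhole camera and constant closing speed, so apparent width
    is inversely proportional to distance: s = w_curr / w_prev = Z_prev / Z_curr,
    and TTC = dt / (s - 1). Returns infinity if the object is not approaching.
    """
    s = w_curr / w_prev
    if s <= 1.0:
        return float("inf")
    return dt / (s - 1.0)
```

For example, an object 20 m away closing at 10 m/s and observed at 10 Hz is 19 m away one frame later; its apparent widths are proportional to 1/20 and 1/19, and the formula recovers a TTC of 1.9 s.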
43,806 | 24 | Title: Master's Thesis: Out-of-distribution Detection with Energy-based Models
Abstract: Today, deep learning is increasingly applied in security-critical situations such as autonomous driving and medical diagnosis. Despite its success, the behavior and robustness of deep networks are not fully understood yet, posing a significant risk. In particular, researchers recently found that neural networks are overly confident in their predictions, even on data they have never seen before. To tackle this issue, one can differentiate two approaches in the literature. One accounts for uncertainty in the predictions, while the second estimates the underlying density of the training data to decide whether a given input is close to the training data, and thus whether the network is able to perform as expected. In this thesis, we investigate the capabilities of energy-based models (EBMs) at the task of fitting the training data distribution to perform detection of out-of-distribution (OOD) inputs. We find that on most datasets, EBMs do not inherently outperform other density estimators at detecting OOD data despite their flexibility. Thus, we additionally investigate the effects of supervision, dimensionality reduction, and architectural modifications on the performance of EBMs. Further, we propose the Energy-Prior Network (EPN), which enables estimation of various uncertainties within an EBM for classification, bridging the gap between the two approaches for tackling the OOD detection problem. We identify a connection between the concentration parameters of the Dirichlet distribution and the joint energy in an EBM. Additionally, this allows optimization without a held-out OOD dataset, which might not be available or might be costly to collect in some applications. Finally, we empirically demonstrate that EPN is able to detect OOD inputs, dataset shifts, and adversarial examples. Theoretically, EPN offers favorable properties for the asymptotic case when inputs are far from the training data. | [] | Test |
43,807 | 30 | Title: A negation detection assessment of GPTs: analysis with the xNot360 dataset
Abstract: Negation is a fundamental aspect of natural language, playing a critical role in communication and comprehension. Our study assesses the negation detection performance of Generative Pre-trained Transformer (GPT) models, specifically GPT-2, GPT-3, GPT-3.5, and GPT-4. We focus on the identification of negation in natural language using a zero-shot prediction approach applied to our custom xNot360 dataset. Our approach examines sentence pairs labeled to indicate whether the second sentence negates the first. Our findings expose a considerable performance disparity among the GPT models, with GPT-4 surpassing its counterparts and GPT-3.5 displaying a marked performance reduction. The overall proficiency of the GPT models in negation detection remains relatively modest, indicating that this task pushes the boundaries of their natural language understanding capabilities. We not only highlight the constraints of GPT models in handling negation but also emphasize the importance of logical reliability in high-stakes domains such as healthcare, science, and law. | [
13700,
37349,
15301,
4071,
35388
] | Train |
43,808 | 2 | Title: Process-Algebraic Models of Multi-Writer Multi-Reader Non-Atomic Registers
Abstract: We present process-algebraic models of multi-writer multi-reader safe, regular and atomic registers. We establish the relationship between our models and alternative versions presented in the literature. We use our models to formally analyse by model checking to what extent several well-known mutual exclusion algorithms are robust for relaxed atomicity requirements. Our analyses refute correctness claims made about some of these algorithms in the literature. | [] | Train |
43,809 | 4 | Title: Demystifying security and compatibility issues in Android Apps
Abstract: Never before has any OS been as popular as Android. Existing mobile phones are not simply devices for making phone calls and receiving SMS messages, but powerful communication and entertainment platforms for web surfing, social networking, etc. Even though the Android OS offers powerful communication and application execution capabilities, it is riddled with defects (e.g., security risks and compatibility issues), new vulnerabilities come to light daily, and bugs cost the economy tens of billions of dollars annually. For example, malicious apps (e.g., back-doors, fraud apps, ransomware, and spyware) are reported [Google, 2022] to exhibit malicious behaviours, including privacy stealing, unwanted program installation, etc. To counteract these threats, many works have been proposed that rely on static analysis techniques to detect such issues. However, static techniques are not sufficient on their own to detect such defects precisely. They will likely yield false positive results, as static analysis has to make trade-offs when handling complicated cases (e.g., object-sensitive vs. object-insensitive analysis). In addition, static analysis techniques will also likely suffer from soundness issues because some complicated features (e.g., reflection, obfuscation, and hardening) are difficult to handle [Sun et al., 2021b, Samhi et al., 2022]. | [] | Train |
43,810 | 27 | Title: Safe Collision and Clamping Reaction for Parallel Robots During Human-Robot Collaboration
Abstract: Parallel robots (PRs) offer the potential for safe human-robot collaboration because of their low moving masses. Due to the in-parallel kinematic chains, however, the risk of contact in the form of collisions and clamping at a chain increases. In this work, ensuring safety is investigated through various contact reactions on a real planar PR. External forces are estimated based on proprioceptive information and a dynamics model, which allows contact detection. Retraction along the direction of the estimated line of action provides an instantaneous response that limits the occurring contact forces in the experiment to 70 N at a maximum velocity of 0.4 m/s. A reduction in the stiffness of a Cartesian impedance controller is investigated as a further strategy. For clamping, a feedforward neural network (FNN) is trained and tested in different joint angle configurations to classify whether a collision or clamping occurs, with an accuracy of 80%. A second FNN classifies the clamping kinematic chain to enable a subsequent kinematic projection of the clamping joint angle onto the rotational platform coordinates. In this way, a structure opening is performed in addition to the softer retraction movement. The reaction strategies are compared in real-world experiments at different velocities and controller stiffnesses to demonstrate their effectiveness. The results show that in all collision and clamping experiments the PR terminates the contact in less than 130 ms. | [
44609,
25937
] | Validation |
43,811 | 2 | Title: Preservation theorems for Tarski's relation algebra
Abstract: We investigate a number of semantically defined fragments of Tarski's algebra of binary relations, including the function-preserving fragment. We address the question whether they are generated by a finite set of operations. We obtain several positive and negative results along these lines. Specifically, the homomorphism-safe fragment is finitely generated (both over finite and over arbitrary structures). The function-preserving fragment is not finitely generated (and, in fact, not expressible by any finite set of guarded second-order definable function-preserving operations). Similarly, the total-function-preserving fragment is not finitely generated (and, in fact, not expressible by any finite set of guarded second-order definable total-function-preserving operations). In contrast, the forward-looking function-preserving fragment is finitely generated by composition, intersection, antidomain, and preferential union. Similarly, the forward-and-backward-looking injective-function-preserving fragment is finitely generated by composition, intersection, antidomain, inverse, and an `injective union' operation. | [] | Test |
43,812 | 30 | Title: Adversarial Fine-Tuning of Language Models: An Iterative Optimisation Approach for the Generation and Detection of Problematic Content
Abstract: In this paper, we tackle the emerging challenge of unintended harmful content generation in Large Language Models (LLMs) with a novel dual-stage optimisation technique using adversarial fine-tuning. Our two-pronged approach employs an adversarial model, fine-tuned to generate potentially harmful prompts, and a judge model, iteratively optimised to discern these prompts. In this adversarial cycle, the two models seek to outperform each other in the prompting phase, generating a dataset of rich examples which are then used for fine-tuning. This iterative application of prompting and fine-tuning allows continuous refinement and improved performance. The performance of our approach is evaluated through classification accuracy on a dataset consisting of problematic prompts not detected by GPT-4, as well as a selection of contentious but unproblematic prompts. We show a considerable increase in the classification accuracy of the judge model on this challenging dataset as it undergoes the optimisation process. Furthermore, we show that a rudimentary model \texttt{ada} can achieve 13\% higher accuracy on the hold-out test set than GPT-4 after only a few rounds of this process, and that this fine-tuning improves performance in parallel tasks such as toxic comment identification. | [
40192,
8673,
14337,
15601,
7796,
45079
] | Validation |
43,813 | 24 | Title: Control of a simulated MRI scanner with deep reinforcement learning
Abstract: Magnetic resonance imaging (MRI) is a highly versatile and widely used clinical imaging tool. The content of MRI images is controlled by an acquisition sequence, which coordinates the timing and magnitude of the scanner hardware activations, which shape and coordinate the magnetisation within the body, allowing a coherent signal to be produced. The use of deep reinforcement learning (DRL) to control this process, and determine new and efficient acquisition strategies in MRI, has not been explored. Here, we take a first step into this area, by using DRL to control a virtual MRI scanner, and framing the problem as a game that aims to efficiently reconstruct the shape of an imaging phantom using partially reconstructed magnitude images. Our findings demonstrate that DRL successfully completed two key tasks: inducing the virtual MRI scanner to generate useful signals and interpreting those signals to determine the phantom's shape. This proof-of-concept study highlights the potential of DRL in autonomous MRI data acquisition, shedding light on the suitability of DRL for complex tasks, with limited supervision, and without the need to provide human-readable outputs. | [] | Test |
43,814 | 3 | Title: Building Dynamic Ontological Models for Place using Social Media Data from Twitter and Sina Weibo
Abstract: Place holds human thoughts and experiences, while space is defined with geometric measurements and coordinate systems; social media serves as the connection between place and space. In this study, we use social media data (Twitter, Weibo) to build a dynamic ontological model in two separate areas: Beijing, China and San Diego, U.S.A. Three spatial analytics methods are utilized to generate the place name ontology: 1) Kernel Density Estimation (KDE); 2) Density-Based Spatial Clustering of Applications with Noise (DBSCAN); 3) hierarchical clustering. We identified feature types of place name ontologies from geotagged social media data and classified them by comparing the default KDE search radius of their geotagged points. By tracing the seasonal changes of highly dynamic non-administrative places, seasonal variation patterns were observed, which illustrates the dynamic changes in place ontology caused by changes in human activities and conversations over time and space. We also investigate the semantic meaning of each place name by examining Pointwise Mutual Information (PMI) scores and word clouds. The major contribution of this research is to link and analyze the associations between place, space, and their attributes in the field of geography. Researchers can use crowd-sourced data to study the ontology of places rather than relying on traditional gazetteers. The dynamic ontology in this research can provide valuable insight into urban planning, re-zoning, and other related fields. | [] | Test |
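The abstract above lists DBSCAN among its three spatial analytics methods; the sketch below (a minimal pure-Python rendition of the standard algorithm, not the paper's code) shows how density-based clustering would group geotagged points into candidate place extents:

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2-D points: returns one cluster id per point,
    with -1 marking noise points that belong to no dense region."""
    def region(i):
        # Indices of all points within eps of point i (including itself).
        return [j for j, q in enumerate(points)
                if math.hypot(points[i][0] - q[0], points[i][1] - q[1]) <= eps]

    labels = [None] * len(points)  # None = not yet visited
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = region(i)
        if len(seeds) < min_pts:
            labels[i] = -1         # not dense enough here: provisional noise
            continue
        labels[i] = cluster
        while seeds:               # grow the cluster from its core points
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs = region(j)
            if len(nbrs) >= min_pts:
                seeds.extend(nbrs)   # j is a core point: keep expanding
        cluster += 1
    return labels
```

With eps tuned to the scale of a place (e.g. a few hundred meters in projected coordinates), dense clumps of check-ins come out as separate clusters and isolated points as noise.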
43,815 | 37 | Title: A Simple Yet High-Performing On-disk Learned Index: Can We Have Our Cake and Eat it Too?
Abstract: While in-memory learned indexes have shown promising performance as compared to B+-tree, most widely used databases in real applications still rely on disk-based operations. Based on our experiments, we observe that directly applying the existing learned indexes on disk suffers from several drawbacks and cannot outperform a standard B+-tree in most cases. Therefore, in this work we make the first attempt to show how the idea of learned index can benefit the on-disk index by proposing AULID, a fully on-disk updatable learned index that can achieve state-of-the-art performance across multiple workload types. The AULID approach combines the benefits from both traditional indexing techniques and the learned indexes to reduce the I/O cost, the main overhead under disk setting. Specifically, three aspects are taken into consideration in reducing I/O costs: (1) reduce the overhead in updating the index structure; (2) induce shorter paths from root to leaf node; (3) achieve better locality to minimize the number of block reads required to complete a scan. Five principles are proposed to guide the design of AULID which shows remarkable performance gains and meanwhile is easy to implement. Our evaluation shows that AULID has comparable storage costs to a B+-tree and is much smaller than other learned indexes, and AULID is up to 2.11x, 8.63x, 1.72x, 5.51x, and 8.02x more efficient than FITing-tree, PGM, B+-tree, ALEX, and LIPP. | [] | Train |
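As a hedged, toy-scale illustration of the general learned-index idea that AULID builds on (this is not AULID itself; the class below is an in-memory simplification of my own): a single linear model predicts a key's rank in a sorted array, and the model's worst-case training error bounds the local search that corrects each prediction:

```python
class LinearLearnedIndex:
    """Toy single-segment learned index over a sorted list of numeric keys.

    A least-squares line maps key -> approximate rank; the maximum error
    observed at build time bounds the window a lookup must scan. Practical
    learned indexes (FITing-tree, PGM, ALEX, ...) use many segments and,
    on disk, page-aware layouts.
    """

    def __init__(self, keys):
        self.keys = keys
        n = len(keys)
        mean_x = sum(keys) / n
        mean_y = (n - 1) / 2
        var = sum((x - mean_x) ** 2 for x in keys)
        cov = sum((x - mean_x) * (y - mean_y) for y, x in enumerate(keys))
        self.a = cov / var if var else 0.0
        self.b = mean_y - self.a * mean_x
        # Largest |predicted rank - true rank| over the training keys.
        self.err = max(abs(round(self.a * x + self.b) - y)
                       for y, x in enumerate(keys))

    def lookup(self, key):
        guess = round(self.a * key + self.b)
        lo = max(0, guess - self.err)
        hi = min(len(self.keys) - 1, guess + self.err)
        for i in range(lo, hi + 1):  # bounded correction scan
            if self.keys[i] == key:
                return i
        return -1
```

On disk, the same error bound translates into the number of pages a lookup may have to read, which is why designs like the one in this row target shorter root-to-leaf paths and better locality.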
43,816 | 26 | Title: BotArtist: Twitter bot detection Machine Learning model based on Twitter suspension
Abstract: Twitter, as one of the most popular social networks, offers a means for communication and online discourse, but has unfortunately been the target of bots and fake accounts, leading to the manipulation and spreading of false information. Towards this end, we gather a challenging, multilingual dataset of social discourse on Twitter, originating from 9M users discussing the recent Russo-Ukrainian war, in order to detect bot accounts and the conversations involving them. We collect the ground truth for our dataset through the Twitter API suspended accounts collection, containing approximately 343K bot accounts and 8M normal users. Additionally, we use a dataset provided by Botometer-V3 with 1,777 Varol accounts, 483 German accounts, and 1,321 US accounts. Besides the publicly available datasets, we also collect two independent datasets around popular discussion topics: the 2022 energy crisis and the 2022 conspiracy discussions. Both datasets were labeled according to the Twitter suspension mechanism. We build a novel ML model for bot detection using the state-of-the-art XGBoost model, combining it with a high volume of tweets labeled according to the Twitter suspension mechanism ground truth. This requires only a limited set of profile features, allowing labeling of the dataset in different time periods from the collection, as it is independent of the Twitter API. In comparison with Botometer, our methodology achieves an average 11% higher ROC-AUC score over two real-case scenario datasets. | [] | Test |
43,817 | 36 | Title: Votemandering: Strategies and Fairness in Political Redistricting
Abstract: Gerrymandering, the deliberate manipulation of electoral district boundaries for political advantage, is a persistent issue in U.S. redistricting cycles. This paper introduces and analyzes a new phenomenon, 'votemandering', a strategic blend of gerrymandering and targeted political campaigning devised to gain more seats by circumventing fairness measures. It leverages accurate demographic and socio-political data to influence voter decisions, bolstered by advancements in technology and data analytics, and executes better-informed redistricting. Votemandering is formulated as a Mixed Integer Program (MIP) that performs fairness-constrained gerrymandering over multiple election rounds, via unit-specific variables for campaigns. To combat votemandering, we present a computationally efficient heuristic for creating and testing district maps that more robustly preserve voter preferences. We analyze the influence of various redistricting constraints and parameters on votemandering efficacy. We explore the interconnectedness of gerrymandering, substantial campaign budgets, and strategic campaigning, illustrating their collective potential to generate biased electoral maps. A Wisconsin State Senate redistricting case study substantiates our findings on real data, demonstrating how major parties can secure additional seats through votemandering. Our findings underscore the practical implications of these manipulations, stressing the need for informed policy and regulation to safeguard democratic processes. | [
1125
] | Train |
43,818 | 6 | Title: Hyper-automation-The next peripheral for automation in IT industries
Abstract: The extension of legacy business process automation beyond the bounds of specific processes is known as hyperautomation. Hyperautomation provides automation for nearly any repetitive action performed by business users by combining AI tools with RPA. It automates complex IT business processes that a company's top minds might not be able to complete. This is an end-to-end automation of a standard business process deployment. It enables automation to perform task digitalization by combining a brain-computer interface (BCI) with AI and RPA automation tools. BCI, in conjunction with automation tools, will advance the detection and generation of automation processes to the next level. It allows enterprises to combine business intelligence systems, address complex requirements, and enhance human expertise and the automation experience. Hyperautomation and its importance in today's environment are briefly discussed in this paper. The article then goes on to discuss how BCI and sensors might aid hyperautomation. Specific application sectors were examined using a variety of flexible technologies associated with this concept, as well as dedicated workflow techniques, which are also diagrammatically illustrated. Hyperautomation is being utilized to dramatically improve the efficiency, accuracy, and human enhancement of automated tasks. It incorporates a number of automated tools in its discovery, implementation, and automation phases. As a result, it is well-suited to integrating cutting-edge technologies and experimenting with new methods of working. Keywords: Hyperautomation, Brain-Computer Interface (BCI), Technology, Use cases, Sensors, Industries. | [] | Train |
43,819 | 24 | Title: n-Step Temporal Difference Learning with Optimal n
Abstract: We consider the problem of finding the optimal value of n in the n-step temporal difference (TD) learning algorithm. We find the optimal n by resorting to a model-free optimization technique involving a one-simulation simultaneous perturbation stochastic approximation (SPSA) based procedure that we adapt to the discrete optimization setting by using a random projection approach. We prove the convergence of our proposed algorithm, SDPSA, using a differential inclusions approach and show that it finds the optimal value of n in n-step TD. Through experiments, we show that the optimal value of n is achieved with SDPSA for arbitrary initial values. | [] | Train |
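For context on the quantity n being optimized above, here is a hedged sketch (my own tabular illustration, not the paper's SDPSA procedure) of one n-step TD prediction pass over a finished episode, where n controls how many real rewards are accumulated before bootstrapping from the value estimate:

```python
def n_step_td_episode(V, states, rewards, n, alpha=0.1, gamma=1.0):
    """One n-step TD pass over a finished episode (tabular prediction).

    states:  s_0 .. s_T visited in order, with s_T terminal.
    rewards: r_1 .. r_T, where rewards[k] is received on entering states[k + 1].
    V:       list of state-value estimates, updated in place.
    """
    T = len(states) - 1
    for t in range(T):
        h = min(t + n, T)  # horizon: n steps ahead, clipped at the terminal
        # n-step return: discounted real rewards up to the horizon ...
        G = sum(gamma ** (k - t) * rewards[k] for k in range(t, h))
        # ... plus a bootstrapped tail value if the horizon is non-terminal.
        if h < T:
            G += gamma ** (h - t) * V[states[h]]
        V[states[t]] += alpha * (G - V[states[t]])
    return V
```

Setting n = 1 recovers one-step TD(0), while n at least the episode length recovers Monte Carlo returns; the paper's point is that the best trade-off in between is problem-dependent and can itself be searched for.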
43,820 | 8 | Title: Wirelessly Powered Federated Learning Networks: Joint Power Transfer, Data Sensing, Model Training, and Resource Allocation
Abstract: Federated learning (FL) has found many successes in wireless networks; however, the implementation of FL has been hindered by the energy limitation of mobile devices (MDs) and the availability of training data at MDs. How to integrate wireless power transfer and mobile crowdsensing towards sustainable FL solutions is a research topic entirely missing from the open literature. This work, for the first time, investigates a resource allocation problem in collaborative sensing-assisted sustainable FL (S2FL) networks with the goal of minimizing the total completion time. We investigate a practical harvesting-sensing-training-transmitting protocol in which energy-limited MDs first harvest energy from RF signals, use it to gain a reward for user participation, sense the training data from the environment, train the local models, and transmit the model updates to the server. The total completion time minimization problem of jointly optimizing power transfer, transmit power allocation, data sensing, bandwidth allocation, local model training, and data transmission is complicated due to the non-convex objective function, highly non-convex constraints, and strongly coupled variables. We propose a computationally efficient path-following algorithm to obtain the optimal solution via a decomposition technique. In particular, inner convex approximations are developed for the resource allocation subproblem, and the subproblems are solved alternately in an iterative fashion. Simulation results are provided to evaluate the effectiveness of the proposed S2FL algorithm in reducing the completion time by up to 21.45% in comparison with other benchmark schemes. Further, we extend our work from frequency division multiple access (FDMA) to non-orthogonal multiple access (NOMA) and show that NOMA can speed up the total completion time by 8.36% on average in the considered FL system. | [] | Train |
43,821 | 34 | Title: On Sensitivity of Compact Directed Acyclic Word Graphs
Abstract: Compact directed acyclic word graphs (CDAWGs) [Blumer et al. 1987] are a fundamental data structure on strings with applications in text pattern searching, data compression, and pattern discovery. Intuitively, the CDAWG of a string $T$ is obtained by merging isomorphic subtrees of the suffix tree [Weiner 1973] of the same string $T$, thus CDAWGs are a compact indexing structure. In this paper, we investigate the sensitivity of CDAWGs when a single character edit operation (insertion, deletion, or substitution) is performed at the left-end of the input string $T$, namely, we are interested in the worst-case increase in the size of the CDAWG after a left-end edit operation. We prove that if $e$ is the number of edges of the CDAWG for string $T$, then the number of new edges added to the CDAWG after a left-end edit operation on $T$ is less than $e$. Further, we present almost matching lower bounds on the sensitivity of CDAWGs for all cases of insertion, deletion, and substitution. | [] | Test |
43,822 | 16 | Title: Uncertainty-Guided Spatial Pruning Architecture for Efficient Frame Interpolation
Abstract: Video frame interpolation (VFI) models apply the convolution operation at all locations, leading to redundant computations in regions with easy motion. Dynamic spatial pruning methods can skip this redundant computation, but they cannot properly identify easy regions in VFI tasks without supervision. In this paper, we develop an Uncertainty-Guided Spatial Pruning (UGSP) architecture to dynamically skip redundant computation for efficient frame interpolation. Specifically, pixels with low uncertainty indicate easy regions, where the calculation can be reduced without causing undesirable visual results. Therefore, we utilize uncertainty-generated mask labels to guide our UGSP in properly locating easy regions. Furthermore, we propose a self-contrast training strategy that leverages an auxiliary non-pruning branch to improve the performance of our UGSP. Extensive experiments show that UGSP maintains performance but reduces FLOPs by 34%/52%/30% compared to our baseline without pruning on the Vimeo90K/UCF101/MiddleBury datasets. In addition, our method achieves state-of-the-art performance with lower FLOPs on multiple benchmarks. | [] | Train |
43,823 | 6 | Title: The Power of Nudging: Exploring Three Interventions for Metacognitive Skills Instruction across Intelligent Tutoring Systems
Abstract: Deductive domains are typical of many cognitive skills in that no single problem-solving strategy is always optimal for solving all problems. It was shown that students who know how and when to use each strategy (StrTime) outperformed those who know neither and stick to the default strategy (Default). In this work, students were trained on a logic tutor that supports a default forward-chaining and a backward-chaining (BC) strategy, then a probability tutor that only supports BC. We investigated three types of interventions on teaching the Default students how and when to use which strategy on the logic tutor: Example, Nudge and Presented. Meanwhile, StrTime students received no interventions. Overall, our results show that Nudge outperformed their Default peers and caught up with StrTime on both tutors. | [
27554,
28811,
38919
] | Train |
43,824 | 30 | Title: Types of Approaches, Applications and Challenges in the Development of Sentiment Analysis Systems
Abstract: Today, the web has become an essential platform for users to express their opinions, emotions, and feelings about various events. Anyone with a smartphone can register an opinion about the purchase of a product, the occurrence of an accident, the outbreak of a new disease, and so on, in blogs and on social networks such as Twitter, WhatsApp, Telegram, and Instagram. Millions of comments are therefore recorded daily, creating a huge volume of unstructured text data from which useful knowledge can be extracted using natural language processing methods. Sentiment analysis is one of the important applications of natural language processing and machine learning, which allows us to analyze the sentiments of comments and other textual information recorded by web users. In the following, the approaches, applications, and challenges of sentiment analysis are explained. | [] | Test |
43,825 | 30 | Title: HopPG: Self-Iterative Program Generation for Multi-Hop Question Answering over Heterogeneous Knowledge
Abstract: The semantic parsing-based method is an important research branch of knowledge-based question answering. It usually generates executable programs based on the question and then conducts them to reason over a knowledge base for answers. Benefiting from this inherent mechanism, it has advantages in performance and interpretability. However, traditional semantic parsing methods usually generate a complete program before executing it, which struggles with multi-hop question answering over heterogeneous knowledge. On the one hand, generating a complete multi-hop program relies on multiple heterogeneous supporting facts, and it is difficult for generators to understand these facts simultaneously. On the other hand, this approach ignores the semantic information of the intermediate answers at each hop, which is beneficial for subsequent generation. To alleviate these challenges, we propose a self-iterative framework for multi-hop program generation (HopPG) over heterogeneous knowledge, which leverages previous execution results to retrieve supporting facts and generate subsequent programs hop by hop. We evaluate our model on MMQA-T^2, and the experimental results show that HopPG outperforms existing semantic-parsing-based baselines, especially on multi-hop questions. | [] | Train |
43,826 | 24 | Title: An Asynchronous Decentralized Algorithm for Wasserstein Barycenter Problem
Abstract: The Wasserstein Barycenter Problem (WBP) has recently received much attention in the field of artificial intelligence. In this paper, we focus on the decentralized setting for WBP and propose an asynchronous decentralized algorithm (A$^2$DWB). A$^2$DWB is induced by a novel stochastic block coordinate descent method to optimize the dual of the entropy-regularized WBP. To our knowledge, A$^2$DWB is the first asynchronous decentralized algorithm for WBP. Unlike its synchronous counterpart, it updates local variables in a manner that relies only on stale neighbor information, which effectively alleviates the waiting overhead and thus substantially improves the time efficiency. Empirical results validate its superior performance compared to the latest synchronous algorithm. | [] | Validation |
43,827 | 30 | Title: Knowing-how & Knowing-that: A New Task for Machine Reading Comprehension of User Manuals
Abstract: The machine reading comprehension (MRC) of user manuals has huge potential in customer service. However, current methods have trouble answering complex questions. Therefore, we introduce the Knowing-how & Knowing-that task, which requires the model to answer factoid-style, procedure-style, and inconsistent questions about user manuals. We resolve this task by jointly representing the steps and facts in a graph TARA, which supports unified inference over various questions. Towards a systematic benchmarking study, we design a heuristic method to automatically parse user manuals into TARAs and build an annotated dataset to test the model's ability to answer real-world questions. Empirical results demonstrate that representing user manuals as TARAs is a desired solution for the MRC of user manuals. An in-depth investigation of TARA further sheds light on the issues and broader impacts of future representations of user manuals. We hope our work can move the MRC of user manuals to a more complex and realistic stage. | [
201
] | Test |
43,828 | 24 | Title: Explore and Exploit the Diverse Knowledge in Model Zoo for Domain Generalization
Abstract: The proliferation of pretrained models, as a result of advancements in pretraining techniques, has led to the emergence of a vast zoo of publicly available models. Effectively utilizing these resources to obtain models with robust out-of-distribution generalization capabilities for downstream tasks has become a crucial area of research. Previous research has primarily focused on identifying the most powerful models within the model zoo, neglecting to fully leverage the diverse inductive biases contained within. This paper argues that the knowledge contained in weaker models is valuable and presents a method for leveraging the diversity within the model zoo to improve out-of-distribution generalization capabilities. Specifically, we investigate the behaviors of various pretrained models across different domains of downstream tasks by characterizing the variations in their encoded representations in terms of two dimensions: diversity shift and correlation shift. This characterization enables us to propose a new algorithm for integrating diverse pretrained models, not limited to the strongest models, in order to achieve enhanced out-of-distribution generalization performance. Our proposed method demonstrates state-of-the-art empirical results on a variety of datasets, thus validating the benefits of utilizing diverse knowledge. | [] | Train |
43,829 | 6 | Title: PokAR: Facilitating Poker Play Through Augmented Reality
Abstract: We introduce PokAR, an augmented reality (AR) application to facilitate poker play. PokAR aims to alleviate three difficulties of traditional poker by leveraging AR technology: (1) need to have physical poker chips, (2) complex rules of poker, (3) slow game pace caused by laborious tasks. Despite the potential benefits of AR in poker, not much research has been done in the field. In fact, PokAR is the first application to enable AR poker on a mobile device without requiring extra costly equipment. This has been done by creating a Snapchat Lens which can be used on most mobile devices. We evaluated this application by instructing 4 participant dyads to use PokAR to engage in poker play and respond to survey questions about their experience. We found that most PokAR features were positively received, AR did not significantly improve nor hinder socialization, PokAR slightly increased the game pace, and participants had an overall enjoyable experience with the Lens. These findings led to three major conclusions: (1) AR has the potential to augment and simplify traditional table games, (2) AR should not be used to replace traditional experiences, only augment them, (3) Future work includes additional features like increased tactility and statistical annotations. | [] | Validation |
43,830 | 16 | Title: Uncurated Image-Text Datasets: Shedding Light on Demographic Bias
Abstract: The increasing tendency to collect large and uncurated datasets to train vision-and-language models has raised concerns about fair representations. It is known that even small but manually annotated datasets, such as MSCOCO, are affected by societal bias. This problem, far from being solved, may be getting worse with data crawled from the Internet without much control. In addition, the lack of tools to analyze societal bias in big collections of images makes addressing the problem extremely challenging. Our first contribution is to annotate part of the Google Conceptual Captions dataset, widely used for training vision-and-language models, with four demographic and two contextual attributes. Our second contribution is to conduct a comprehensive analysis of the annotations, focusing on how different demographic groups are represented. Our last contribution lies in evaluating three prevailing vision-and-language tasks: image captioning, text-image CLIP embeddings, and text-to-image generation, showing that societal bias is a persistent problem in all of them. https://github.com/noagarcia/phase | [
9181,
26004,
3157
] | Test |
43,831 | 24 | Title: SAM operates far from home: eigenvalue regularization as a dynamical phenomenon
Abstract: The Sharpness Aware Minimization (SAM) optimization algorithm has been shown to control large eigenvalues of the loss Hessian and provide generalization benefits in a variety of settings. The original motivation for SAM was a modified loss function which penalized sharp minima; subsequent analyses have also focused on the behavior near minima. However, our work reveals that SAM provides a strong regularization of the eigenvalues throughout the learning trajectory. We show that in a simplified setting, SAM dynamically induces a stabilization related to the edge of stability (EOS) phenomenon observed in large learning rate gradient descent. Our theory predicts the largest eigenvalue as a function of the learning rate and SAM radius parameters. Finally, we show that practical models can also exhibit this EOS stabilization, and that understanding SAM must account for these dynamics far away from any minima. | [
1136
] | Test |
43,832 | 5 | Title: Mitigation of liveness attacks in DAG-based ledgers
Abstract: The robust construction of the ledger data structure is an essential ingredient for the safe operation of a distributed ledger. While in traditional linear blockchain systems, permission to append to the structure is leader-based, in Directed Acyclic Graph-based ledgers, the writing access can be organised leaderless. However, this leaderless approach relies on fair treatment of non-referenced blocks, i.e. tips, by honest block issuers. We study the impact of a deviation from the standard tip selection by a subset of block issuers with the aim of halting the confirmation of honest blocks entirely. We provide models of this so-called orphanage of blocks and validate these through open-sourced simulation studies. A critical threshold for the adversary issuance rate is shown to exist, above which the tip pool becomes unstable, while for values below it the orphanage decreases exponentially. We study the robustness of the protocol with an expiration time on tips, also called garbage collection, and modification of the parent references per block. | [
17332
] | Train |
43,833 | 16 | Title: Overcoming Language Bias in Remote Sensing Visual Question Answering via Adversarial Training
Abstract: The Visual Question Answering (VQA) system offers a user-friendly interface and enables human-computer interaction. However, VQA models commonly face the challenge of language bias, resulting from the learned superficial correlation between questions and answers. To address this issue, in this study, we present a novel framework to reduce the language bias of the VQA for remote sensing data (RSVQA). Specifically, we add an adversarial branch to the original VQA framework. Based on the adversarial branch, we introduce two regularizers to constrain the training process against language bias. Furthermore, to evaluate the performance in terms of language bias, we propose a new metric that combines standard accuracy with the performance drop when incorporating question and random image information. Experimental results demonstrate the effectiveness of our method. We believe that our method can shed light on future work for reducing language bias on the RSVQA task. | [
9483,
24750
] | Train |
43,834 | 30 | Title: MooseNet: A Trainable Metric for Synthesized Speech with a PLDA Module
Abstract: We present MooseNet, a trainable speech metric that predicts the listeners' Mean Opinion Score (MOS). We propose a novel approach where the Probabilistic Linear Discriminative Analysis (PLDA) generative model is used on top of an embedding obtained from a self-supervised learning (SSL) neural network (NN) model. We show that PLDA works well with a non-finetuned SSL model when trained only on 136 utterances (ca. one minute training time) and that PLDA consistently improves various neural MOS prediction models, even state-of-the-art models with task-specific fine-tuning. Our ablation study shows PLDA training superiority over SSL model fine-tuning in a low-resource scenario. We also improve SSL model fine-tuning using a convenient optimizer choice and additional contrastive and multi-task training objectives. The fine-tuned MooseNet NN with the PLDA module achieves the best results, surpassing the SSL baseline on the VoiceMOS Challenge data. | [] | Train |
43,835 | 16 | Title: INV: Towards Streaming Incremental Neural Videos
Abstract: Recent works in spatiotemporal radiance fields can produce photorealistic free-viewpoint videos. However, they are inherently unsuitable for interactive streaming scenarios (e.g. video conferencing, telepresence) because they have an inevitable lag even if the training is instantaneous. This is because these approaches consume videos and thus have to buffer chunks of frames (often seconds) before processing. In this work, we take a step towards interactive streaming via a frame-by-frame approach naturally free of lag. Conventional wisdom believes that per-frame NeRFs are impractical due to prohibitive training costs and storage. We break this belief by introducing Incremental Neural Videos (INV), a per-frame NeRF that is efficiently trained and streamable. We designed INV based on two insights: (1) Our main finding is that MLPs naturally partition themselves into Structure and Color Layers, which store structural and color/texture information respectively. (2) We leverage this property to retain and improve upon knowledge from previous frames, thus amortizing training across frames and reducing redundant learning. As a result, with negligible changes to NeRF, INV can achieve good qualities (>28.6 dB) in 8 min/frame. It can also outperform prior SOTA in 19% less training time. Additionally, our Temporal Weight Compression reduces the per-frame size to 0.3 MB/frame (6.6% of NeRF). More importantly, INV is free from buffer lag and is naturally fit for streaming. While this work does not achieve real-time training, it shows that incremental approaches like INV present new possibilities in interactive 3D streaming. Moreover, our discovery of natural information partition leads to a better understanding and manipulation of MLPs. Code and dataset will be released soon. | [
12161
] | Train |
43,836 | 16 | Title: GenPose: Generative Category-level Object Pose Estimation via Diffusion Models
Abstract: Object pose estimation plays a vital role in embodied AI and computer vision, enabling intelligent agents to comprehend and interact with their surroundings. Despite the practicality of category-level pose estimation, current approaches encounter challenges with partially observed point clouds, known as the multi-hypothesis issue. In this study, we propose a novel solution by reframing category-level object pose estimation as conditional generative modeling, departing from traditional point-to-point regression. Leveraging score-based diffusion models, we estimate object poses by sampling candidates from the diffusion model and aggregating them through a two-step process: filtering out outliers via likelihood estimation and subsequently mean-pooling the remaining candidates. To avoid the costly integration process when estimating the likelihood, we introduce an alternative method that trains an energy-based model from the original score-based model, enabling end-to-end likelihood estimation. Our approach achieves state-of-the-art performance on the REAL275 dataset, surpassing 50% and 60% on strict 5°2cm and 5°5cm metrics, respectively. Furthermore, our method demonstrates strong generalizability to novel categories sharing similar symmetric properties without fine-tuning and can readily adapt to object pose tracking tasks, yielding comparable results to the current state-of-the-art baselines. | [
21152,
19482,
31562
] | Train |
43,837 | 27 | Title: Decentralized Social Navigation with Non-Cooperative Robots via Bi-Level Optimization
Abstract: This paper presents a fully decentralized approach for real-time non-cooperative multi-robot navigation in social mini-games, such as navigating through a narrow doorway or negotiating right of way at a corridor intersection. Our contribution is a new real-time bi-level optimization algorithm, in which the top-level optimization consists of computing a fair and collision-free ordering followed by the bottom-level optimization which plans optimal trajectories conditioned on the ordering. We show that, given such a priority order, we can impose simple kinodynamic constraints on each robot that are sufficient for it to plan collision-free trajectories with minimal deviation from their preferred velocities, similar to how humans navigate in these scenarios. We successfully deploy the proposed algorithm in the real world using F$1/10$ robots, a Clearpath Jackal, and a Boston Dynamics Spot as well as in simulation using the SocialGym 2.0 multi-agent social navigation simulator, in the doorway and corridor intersection scenarios. We compare with state-of-the-art social navigation methods using multi-agent reinforcement learning, collision avoidance algorithms, and crowd simulation models. We show that $(i)$ classical navigation performs $44\%$ better than the state-of-the-art learning-based social navigation algorithms, $(ii)$ without a scheduling protocol, our approach results in collisions in social mini-games $(iii)$ our approach yields $2\times$ and $5\times$ fewer velocity changes than CADRL in doorways and intersections, and finally $(iv)$ bi-level navigation in doorways at a flow rate of $2.8 - 3.3$ (ms)$^{-1}$ is comparable to flow rate in human navigation at a flow rate of $4$ (ms)$^{-1}$. | [
33142,
7582
] | Train |
43,838 | 30 | Title: Formal specification terminology for demographic agent-based models of fixed-step single-clocked simulations
Abstract: This document presents adequate formal terminology for the mathematical specification of a subset of Agent Based Models (ABMs) in the field of Demography. The simulation of the targeted ABMs follows a fixed-step single-clocked pattern. The proposed terminology further improves model understanding and can act as a stand-alone methodology for the specification and optionally the documentation of a significant set of (demographic) ABMs. Nevertheless, it is imaginable that this terminology, possibly with further extensions, can be merged with the largely-informal, widely-used model documentation and communication O.D.D. protocol [Grimm et al., 2020, Amouroux et al., 2010] to reduce many sources of ambiguity that hinder model replications by other modelers. A demographic model documentation, a largely simplified version of the Lone Parent Model [Gostoli and Silverman, 2020], is separately published in [Elsheikh, 2023b] as an illustration of the formal terminology. The model was implemented in the Julia language [Elsheikh, 2023a] based on the Agents.jl Julia package [Datseris et al., 2022]. | [
14430
] | Validation |
43,839 | 10 | Title: Self-Supervised Temporal Analysis of Spatiotemporal Data
Abstract: There exists a correlation between geospatial activity temporal patterns and type of land use. A novel self-supervised approach is proposed to stratify landscape based on mobility activity time series. First, the time series signal is transformed to the frequency domain and then compressed into task-agnostic temporal embeddings by a contractive autoencoder, which preserves cyclic temporal patterns observed in time series. The pixel-wise embeddings are converted to image-like channels that can be used for task-based, multimodal modeling of downstream geospatial tasks using deep semantic segmentation. Experiments show that temporal embeddings are semantically meaningful representations of time series data and are effective across different tasks such as classifying residential area and commercial areas. | [] | Train |
43,840 | 30 | Title: Detecting Reddit Users with Depression Using a Hybrid Neural Network
Abstract: Depression is a widespread mental health issue, affecting an estimated 3.8% of the global population. It is also one of the main contributors to disability worldwide. Recently it is becoming popular for individuals to use social media platforms (e.g., Reddit) to express their difficulties and health issues (e.g., depression) and seek support from other users in online communities. It opens great opportunities to automatically identify social media users with depression by parsing millions of posts for potential interventions. Deep learning methods have begun to dominate in the field of machine learning and natural language processing (NLP) because of their ease of use, efficient processing, and state-of-the-art results on many NLP tasks. In this work, we propose a hybrid deep learning model which combines a pretrained sentence BERT (SBERT) and convolutional neural network (CNN) to detect individuals with depression with their Reddit posts. The sentence BERT is used to learn the meaningful representation of semantic information in each post. CNN enables the further transformation of those embeddings and the temporal identification of behavioral patterns of users. We trained and evaluated the model performance to identify Reddit users with depression by utilizing the Self-reported Mental Health Diagnoses (SMHD) data. The hybrid deep learning model achieved an accuracy of 0.86 and an F1 score of 0.86 and outperformed the state-of-the-art documented result (F1 score of 0.79) by other machine learning models in the literature. The results show the feasibility of the hybrid model to identify individuals with depression. Although the hybrid model is validated to detect depression with Reddit posts, it can be easily tuned and applied to other text classification tasks and different clinical applications. | [] | Validation |
43,841 | 16 | Title: Deep Equilibrium Multimodal Fusion
Abstract: Multimodal fusion integrates the complementary information present in multiple modalities and has gained much attention recently. Most existing fusion approaches either learn a fixed fusion strategy during training and inference, or are only capable of fusing the information to a certain extent. Such solutions may fail to fully capture the dynamics of interactions across modalities especially when there are complex intra- and inter-modality correlations to be considered for informative multimodal fusion. In this paper, we propose a novel deep equilibrium (DEQ) method towards multimodal fusion via seeking a fixed point of the dynamic multimodal fusion process and modeling the feature correlations in an adaptive and recursive manner. This new way encodes the rich information within and across modalities thoroughly from low level to high level for efficacious downstream multimodal learning and is readily pluggable to various multimodal frameworks. Extensive experiments on BRCA, MM-IMDB, CMU-MOSI, SUN RGB-D, and VQA-v2 demonstrate the superiority of our DEQ fusion. More remarkably, DEQ fusion consistently achieves state-of-the-art performance on multiple multimodal benchmarks. The code will be released. | [] | Train |
43,842 | 28 | Title: Efficient Interpolation-Based Decoding of Reed-Solomon Codes
Abstract: We propose a new interpolation-based error decoding algorithm for (n,k) Reed-Solomon (RS) codes over a finite field of size q, where n = q − 1 is the length and k is the dimension. In particular, we employ the fast Fourier transform (FFT) together with properties of a circulant matrix associated with the error interpolation polynomial and some known results from elimination theory in the decoding process. The asymptotic computational complexity of the proposed algorithm for correcting any $t \leq \left\lfloor {\frac{{n - k}}{2}} \right\rfloor$ errors in an (n,k) RS code is of order ${\mathcal{O}}\left( {t{{\log }^2}t} \right)$ and ${\mathcal{O}}\left( {n{{\log }^2}n\log \log n} \right)$ over FFT-friendly and arbitrary finite fields, respectively, achieving the best currently known asymptotic decoding complexity, proposed for the same set of parameters. | [] | Train |
43,843 | 16 | Title: UTSGAN: Unseen Transition Suss GAN for Transition-Aware Image-to-image Translation
Abstract: In the field of Image-to-Image (I2I) translation, ensuring consistency between input images and their translated results is a key requirement for producing high-quality and desirable outputs. Previous I2I methods have relied on result consistency, which enforces consistency between the translated results and the ground truth output, to achieve this goal. However, result consistency is limited in its ability to handle complex and unseen attribute changes in translation tasks. To address this issue, we introduce a transition-aware approach to I2I translation, where the data translation mapping is explicitly parameterized with a transition variable, allowing for the modelling of unobserved translations triggered by unseen transitions. Furthermore, we propose the use of transition consistency, defined on the transition variable, to enable regularization of consistency on unobserved translations, which is omitted in previous works. Based on these insights, we present Unseen Transition Suss GAN (UTSGAN), a generative framework that constructs a manifold for the transition with a stochastic transition encoder and coherently regularizes and generalizes result consistency and transition consistency on both training and unobserved translations with tailor-designed constraints. Extensive experiments on four different I2I tasks performed on five different datasets demonstrate the efficacy of our proposed UTSGAN in performing consistent translations. | [] | Train |
43,844 | 16 | Title: Slovo: Russian Sign Language Dataset
Abstract: One of the main challenges of the sign language recognition task is the difficulty of collecting a suitable dataset due to the gap between hard-of-hearing and hearing societies. In addition, the sign language in each country differs significantly, which obliges the creation of new data for each of them. This paper presents the Russian Sign Language (RSL) video dataset Slovo, produced using crowdsourcing platforms. The dataset contains 20,000 FullHD recordings, divided into 1,000 classes of isolated RSL gestures received by 194 signers. We also provide the entire dataset creation pipeline, from data collection to video annotation, with the following demo application. Several neural networks are trained and evaluated on the Slovo to demonstrate its teaching ability. Proposed data and pre-trained models are publicly available. | [
8424
] | Train |
43,845 | 15 | Title: Statistical Hardware Design With Multi-model Active Learning
Abstract: With the rising complexity of numerous novel applications that serve our modern society comes the strong need to design efficient computing platforms. Designing efficient hardware is, however, a complex multi-objective problem that deals with multiple parameters and their interactions. Given that there are a large number of parameters and objectives involved in hardware design, synthesizing all possible combinations is not a feasible method to find the optimal solution. One promising approach to tackle this problem is statistical modeling of a desired hardware performance. Here, we propose a model-based active learning approach to solve this problem. Our proposed method uses Bayesian models to characterize various aspects of hardware performance. We also use transfer learning and Gaussian regression bootstrapping techniques in conjunction with active learning to create more accurate models. Our proposed statistical modeling method provides hardware models that are sufficiently accurate to perform design space exploration as well as performance prediction simultaneously. We use our proposed method to perform design space exploration and performance prediction for various hardware setups, such as micro-architecture design and OpenCL kernels for FPGA targets. Our experiments show that the number of samples required to create performance models significantly reduces while maintaining the predictive power of our proposed statistical models. For instance, in our performance prediction setting, the proposed method needs 65% fewer samples to create the model, and in the design space exploration setting, our proposed method can find the best parameter settings by exploring less than 50 samples. | [] | Train |
43,846 | 34 | Title: Advice Complexity bounds for Online Delayed F-Node-, H-Node- and H-Edge-Deletion Problems
Abstract: Let F be a fixed finite obstruction set of graphs and G be a graph revealed in an online fashion, node by node. The online Delayed F-Node-Deletion Problem (F-Edge-Deletion Problem) is to keep G free of every H in F by deleting nodes (edges) until no induced subgraph isomorphic to any graph in F can be found in G. The task is to keep the number of deletions minimal. Advice complexity is a model in which an online algorithm has access to a binary tape of infinite length, on which an oracle can encode information to increase the performance of the algorithm. We are interested in the minimum number of advice bits that are necessary and sufficient to solve a deletion problem optimally. In this work, we first give essentially tight bounds on the advice complexity of the Delayed F-Node-Deletion Problem and F-Edge-Deletion Problem where F consists of a single, arbitrary graph H. We then show that the gadget used to prove these results can be utilized to give tight bounds in the case of node deletions if F consists of either only disconnected graphs or only connected graphs. Finally, we show that the number of advice bits that is necessary and sufficient to solve the general Delayed F-Node-Deletion Problem is heavily dependent on the obstruction set F. To this end, we provide sets for which this number is either constant, logarithmic or linear in the optimal number of deletions. | [] | Train |
43,847 | 30 | Title: Visual Writing Prompts: Character-Grounded Story Generation with Curated Image Sequences
Abstract: Current work on image-based story generation suffers from the fact that the existing image sequence collections do not have coherent plots behind them. We improve visual story generation by producing a new image-grounded dataset, Visual Writing Prompts (VWP). VWP contains almost 2K selected sequences of movie shots, each including 5-10 images. The image sequences are aligned with a total of 12K stories which were collected via crowdsourcing given the image sequences and a set of grounded characters from the corresponding image sequence. Our new image sequence collection and filtering process has allowed us to obtain stories that are more coherent, diverse, and visually grounded compared to previous work. We also propose a character-based story generation model driven by coherence as a strong baseline. Evaluations show that our generated stories are more coherent, visually grounded, and diverse than stories generated with the current state-of-the-art model. Our code, image features, annotations and collected stories are available at https://vwprompt.github.io/. | [
27973
] | Train |
43,848 | 4 | Title: A new color image secret sharing protocol
Abstract: Visual cryptography aims to protect images against their possible illegitimate use. Thus, one can cipher, hash, or add watermarks for protecting copyright, among others. In this paper we provide a new solution to the problem of secret sharing for the case when the secret is an image. Our method combines the Shamir scheme for secret sharing using finite fields of characteristic 2 with the CBC mode of operation of a secure symmetric cryptographic scheme like AES, so that the security relies on that of the mentioned techniques. The resulting shares have the same resolution as that of the original image. The idea of the method could be generalized to other multimedia formats like audio or video, adapting the method to the corresponding encoded information. | [] | Train |
43,849 | 34 | Title: Matching Statistics speed up BWT construction
Abstract: Due to the exponential growth of genomic data, constructing dedicated data structures has become the principal bottleneck in common bioinformatics applications. In particular, the Burrows-Wheeler Transform (BWT) is the basis of some of the most popular self-indexes for genomic data, due to its known favourable behaviour on repetitive data. Some tools that exploit the intrinsic repetitiveness of biological data have risen in popularity, due to their speed and low space consumption. We introduce a new algorithm for computing the BWT, which takes advantage of the redundancy of the data through a compressed version of matching statistics, the $\textit{CMS}$ of [Lipták et al., WABI 2022]. We show that it suffices to sort a small subset of suffixes, lowering both computation time and space. Our result is due to a new insight which links the so-called insert-heads of [Lipták et al., WABI 2022] to the well-known run boundaries of the BWT. We give two implementations of our algorithm, called $\texttt{CMS}$-$\texttt{BWT}$, both competitive in our experimental validation on highly repetitive real-life datasets. In most cases, they outperform other tools w.r.t. running time, trading off a higher memory footprint, which, however, is still considerably smaller than the total size of the input data. | [] | Train |
43,850 | 8 | Title: MonArch: Network Slice Monitoring Architecture for Cloud Native 5G Deployments
Abstract: Automated decision making algorithms are expected to play a key role in management and orchestration of network slices in 5G and beyond networks. State-of-the-art algorithms for automated orchestration and management tend to rely on data-driven methods which require a timely and accurate view of the network. Accurately monitoring an end-to-end (E2E) network slice requires a scalable monitoring architecture that facilitates collection and correlation of data from various network segments comprising the slice. The state-of-the-art on 5G monitoring mostly focuses on scalability, falling short in providing explicit support for network slicing and computing network slice key performance indicators (KPIs). To fill this gap, in this paper, we present MonArch, a scalable monitoring architecture for 5G, which focuses on network slice monitoring, slice KPI computation, and an application programming interface (API) for specifying slice monitoring requests. We validate the proposed architecture by implementing MonArch on a 5G testbed, and demonstrate its capability to compute a network slice KPI (e.g., slice throughput). Our evaluations show that MonArch does not significantly increase data ingestion time when scaling the number of slices and that a 5-second monitoring interval offers a good balance between monitoring overhead and accuracy. | [] | Train |
43,851 | 24 | Title: Dance Generation by Sound Symbolic Words
Abstract: This study introduces a novel approach to generate dance motions using onomatopoeia as input, with the aim of enhancing creativity and diversity in dance generation. Unlike text and music, onomatopoeia conveys rhythm and meaning through abstract word expressions without constraints on expression and without need for specialized knowledge. We adapt the AI Choreographer framework and employ the Sakamoto system, a feature extraction method for onomatopoeia focusing on phonemes and syllables. Additionally, we present a new dataset of 40 onomatopoeia-dance motion pairs collected through a user survey. Our results demonstrate that the proposed method enables more intuitive dance generation and can create dance motions using sound-symbolic words from a variety of languages, including those without onomatopoeia. This highlights the potential for diverse dance creation across different languages and cultures, accessible to a wider audience. Qualitative samples from our model can be found at: https://sites.google.com/view/onomatopoeia-dance/home/. | [] | Train |
43,852 | 27 | Title: Multimodal Grounding for Embodied AI via Augmented Reality Headsets for Natural Language Driven Task Planning
Abstract: Recent advances in generative modeling have spurred a resurgence in the field of Embodied Artificial Intelligence (EAI). EAI systems typically deploy large language models to physical systems capable of interacting with their environment. In our exploration of EAI for industrial domains, we successfully demonstrate the feasibility of co-located, human-robot teaming. Specifically, we construct an experiment where an Augmented Reality (AR) headset mediates information exchange between an EAI agent and human operator for a variety of inspection tasks. To our knowledge the use of an AR headset for multimodal grounding and the application of EAI to industrial tasks are novel contributions within Embodied AI research. In addition, we highlight potential pitfalls in EAI's construction by providing quantitative and qualitative analysis on prompt robustness. | [
8084,
295
] | Test |
43,853 | 24 | Title: FedACK: Federated Adversarial Contrastive Knowledge Distillation for Cross-Lingual and Cross-Model Social Bot Detection
Abstract: Social bot detection is of paramount importance to the resilience and security of online social platforms. The state-of-the-art detection models are siloed and have largely overlooked a variety of data characteristics from multiple cross-lingual platforms. Meanwhile, the heterogeneity of data distribution and model architecture make it intricate to devise an efficient cross-platform and cross-model detection framework. In this paper, we propose FedACK, a new federated adversarial contrastive knowledge distillation framework for social bot detection. We devise a GAN-based federated knowledge distillation mechanism for efficiently transferring knowledge of data distribution among clients. In particular, a global generator is used to extract the knowledge of global data distribution and distill it into each client’s local model. We leverage local discriminator to enable customized model design and use local generator for data enhancement with hard-to-decide samples. Local training is conducted as multi-stage adversarial and contrastive learning to enable consistent feature spaces among clients and to constrain the optimization direction of local models, reducing the divergences between local and global models. Experiments demonstrate that FedACK outperforms the state-of-the-art approaches in terms of accuracy, communication efficiency, and feature space consistency. | [
4557
] | Train |
43,854 | 18 | Title: Graphene-based RRAM devices for neural computing
Abstract: Resistive random access memory (RRAM) is very well known for its potential application in in-memory and neural computing. However, these devices often exhibit different types of device-to-device and cycle-to-cycle variability, which makes it harder to build highly accurate crossbar arrays. Traditional RRAM designs make use of various filament-based oxide materials for creating a channel which is sandwiched between two electrodes to form a two-terminal structure. They are often subjected to mechanical and electrical stress over repeated read-and-write cycles. The behavior of these devices, when fabricated, often varies in practice across wafer arrays under this stress. The use of emerging 2D materials is explored to improve electrical endurance, long retention time, high switching speed, and lower power losses. This study provides an in-depth exploration of neuro-memristive computing and its potential applications, focusing specifically on the utilization of graphene and 2D materials in resistive random-access memory (RRAM) for neural computing. The paper presents a comprehensive analysis of the structural and design aspects of graphene-based RRAM, along with a thorough examination of commercially available RRAM models and their fabrication techniques. Furthermore, the study investigates the diverse range of applications that can benefit from graphene-based RRAM devices. | [] | Train |
43,855 | 30 | Title: An algebraic approach to translating Japanese
Abstract: We use Lambek’s pregroups and the framework of compositional distributional models of language (“DisCoCat”) to study translations from Japanese to English as pairs of functors. Adding decorations to pregroups, we show how to handle word order changes between languages. | [] | Validation |
43,856 | 24 | Title: On Data Imbalance in Molecular Property Prediction with Pre-training
Abstract: Revealing and analyzing the various properties of materials is an essential and critical issue in the development of materials, including batteries, semiconductors, catalysts, and pharmaceuticals. Traditionally, these properties have been determined through theoretical calculations and simulations. However, it is not practical to perform such calculations on every single candidate material. Recently, a method combining theoretical calculation and machine learning has emerged, which involves training machine learning models on a subset of theoretical calculation results to construct a surrogate model that can be applied to the remaining materials. On the other hand, a technique called pre-training is used to improve the accuracy of machine learning models. Pre-training involves training the model on a pretext task, which is different from the target task, before training the model on the target task. This process aims to extract the input data features, stabilizing the learning process and improving its accuracy. However, in the case of molecular property prediction, there is a strong imbalance in the distribution of input data and features, which may lead to biased learning towards frequently occurring data during pre-training. In this study, we propose an effective pre-training method that addresses the imbalance in input data. We aim to improve the final accuracy by modifying the loss function of the existing representative pre-training method, node masking, to compensate for the imbalance. We have investigated and assessed the impact of our proposed imbalance compensation on pre-training and the final prediction accuracy through experiments and evaluations using benchmarks of molecular property prediction models. | [] | Validation |
43,857 | 2 | Title: On Model-Checking Higher-Order Effectful Programs (Long Version)
Abstract: Model-checking is one of the most powerful techniques for verifying systems and programs, which since the pioneering results by Knapik et al., Ong, and Kobayashi, is known to be applicable to functional programs with higher-order types against properties expressed by formulas of monadic second-order logic. What happens when the program in question, in addition to higher-order functions, also exhibits algebraic effects such as probabilistic choice or global store? The results in the literature range from those, mostly positive, about nondeterministic effects, to those about probabilistic effects, in the presence of which even mere reachability becomes undecidable. This work takes a fresh and general look at the problem, first of all showing that there is an elegant and natural way of viewing higher-order programs producing algebraic effects as ordinary higher-order recursion schemes. We then move on to consider effect handlers, showing that in their presence the model checking problem is bound to be undecidable in the general case, while it stays decidable when handlers have a simple syntactic form, still sufficient to capture so-called generic effects. Along the way we hint at what a general specification language could look like, in this way justifying some of the results in the literature, and deriving new ones. | [] | Train |
43,858 | 16 | Title: ConSlide: Asynchronous Hierarchical Interaction Transformer with Breakup-Reorganize Rehearsal for Continual Whole Slide Image Analysis
Abstract: Whole slide image (WSI) analysis has become increasingly important in the medical imaging community, enabling automated and objective diagnosis, prognosis, and therapeutic-response prediction. However, in clinical practice, the ever-evolving environment hampers the utility of WSI analysis models. In this paper, we propose the FIRST continual learning framework for WSI analysis, named ConSlide, to tackle the challenges of enormous image size, utilization of hierarchical structure, and catastrophic forgetting by progressive model updating on multiple sequential datasets. Our framework contains three key components. The Hierarchical Interaction Transformer (HIT) is proposed to model and utilize the hierarchical structural knowledge of WSI. The Breakup-Reorganize (BuRo) rehearsal method is developed for WSI data replay with an efficient region storing buffer and a WSI reorganizing operation. The asynchronous updating mechanism is devised to encourage the network to learn generic and specific knowledge respectively during the replay stage, based on a nested cross-scale similarity learning (CSSL) module. We evaluated the proposed ConSlide on four public WSI datasets from TCGA projects. It performs best over other state-of-the-art methods with a fair WSI-based continual learning setting and achieves a better trade-off between overall performance and forgetting on previous tasks. | [] | Train |
43,859 | 31 | Title: Reciprocal Sequential Recommendation
Abstract: Reciprocal recommender system (RRS), considering a two-way matching between two parties, has been widely applied in online platforms like online dating and recruitment. Existing RRS models mainly capture static user preferences, which have neglected the evolving user tastes and the dynamic matching relation between the two parties. Although dynamic user modeling has been well-studied in sequential recommender systems, existing solutions are developed in a user-oriented manner. Therefore, it is non-trivial to adapt sequential recommendation algorithms to reciprocal recommendation. In this paper, we formulate RRS as a distinctive sequence matching task, and further propose a new approach ReSeq for RRS, which is short for Reciprocal Sequential recommendation. To capture dual-perspective matching, we propose to learn fine-grained sequence similarities by co-attention mechanism across different time steps. Further, to improve the inference efficiency, we introduce the self-distillation technique to distill knowledge from the fine-grained matching module into the more efficient student module. In the deployment stage, only the efficient student module is used, greatly speeding up the similarity computation. Extensive experiments on five real-world datasets from two scenarios demonstrate the effectiveness and efficiency of the proposed method. Our code is available at https://github.com/RUCAIBox/ReSeq/. | [] | Validation |
43,860 | 16 | Title: The impact of individual information exchange strategies on the distribution of social wealth
Abstract: Wealth distribution is a complex and critical aspect of any society. Information exchange is considered to have played a role in shaping wealth distribution patterns, but the specific dynamic mechanism is still unclear. In this research, we used simulation-based methods to investigate the impact of different modes of information exchange on wealth distribution. We compared different combinations of information exchange strategies and moving strategies, and analyzed their impact on wealth distribution using classic wealth distribution indicators such as the Gini coefficient. Our findings suggest that information exchange strategies have a significant impact on wealth distribution and that promoting more equitable access to information and resources is crucial in building a just and equitable society for all. | [] | Train |
43,861 | 37 | Title: IWEK: An Interpretable What-If Estimator for Database Knobs
Abstract: The knobs of modern database management systems have a significant impact on the performance of the systems. With the development of cloud databases, an estimation service for knobs is urgently needed to improve database performance. Unfortunately, little attention has been paid to estimating the performance of certain knob configurations. To fill this gap, we propose IWEK, an interpretable and transferable what-if estimator for database knobs. To achieve interpretable estimation, we propose a linear estimator based on the random forest for database knobs, yielding explicit and trustworthy evaluation results. Due to its interpretability, our estimator captures the direct relationships between a knob configuration and its performance, guaranteeing the high availability of the database. We design a two-stage transfer algorithm to leverage historical experiences to efficiently build the knob estimator for new scenarios. Due to its lightweight design, our method can largely reduce the overhead of collecting training data and can achieve cold-start knob estimation for new scenarios. Extensive experiments on YCSB and TPCC show that our method performs well in interpretable and transferable knob estimation with limited training data. Further, our method can achieve efficient estimator transfer with only 10 samples in TPCC and YCSB. | [] | Test |
43,862 | 4 | Title: RAPTOR: Advanced Persistent Threat Detection in Industrial IoT via Attack Stage Correlation
Abstract: IIoT (Industrial Internet-of-Things) systems are becoming more prone to attacks by APT (Advanced Persistent Threat) adversaries. Past APT attacks on IIoT systems, such as the 2016 Ukrainian power grid attack which cut the capital Kyiv off from power for an hour and the 2017 Saudi petrochemical plant attack which almost shut down the plant's safety controllers, have shown that APT campaigns can disrupt industrial processes, shut down critical systems and endanger human lives. In this work, we propose RAPTOR, a system to detect APT campaigns in IIoT environments. RAPTOR detects and correlates various APT attack stages (adapted to IIoT) using multiple data sources. Subsequently, it constructs a high-level APT campaign graph which can be used by cybersecurity analysts towards attack analysis and mitigation. A performance evaluation of RAPTOR's APT stage detection shows high precision and low false positive/negative rates. We also show that RAPTOR is able to construct the APT campaign graph for APT attacks (modelled after real-world attacks on ICS/OT infrastructure) executed on our IIoT testbed. | [] | Train |
43,863 | 24 | Title: Constrained Exploration in Reinforcement Learning with Optimality Preservation
Abstract: We consider a class of reinforcement-learning systems in which the agent follows a behavior policy to explore a discrete state-action space to find an optimal policy while adhering to some restriction on its behavior. Such restriction may prevent the agent from visiting some state-action pairs, possibly leading to the agent finding only a sub-optimal policy. To address this problem we introduce the concept of constrained exploration with optimality preservation, whereby the exploration behavior of the agent is constrained to meet a specification while the optimality of the (original) unconstrained learning process is preserved. We first establish a feedback-control structure that models the dynamics of the unconstrained learning process. We then extend this structure by adding a supervisor to ensure that the behavior of the agent meets the specification, and establish (for a class of reinforcement-learning problems with a known deterministic environment) a necessary and sufficient condition under which optimality is preserved. This work demonstrates the utility and the prospect of studying reinforcement-learning problems in the context of the theories of discrete-event systems, automata and formal languages. | [] | Train |
43,864 | 16 | Title: Do-GOOD: Towards Distribution Shift Evaluation for Pre-Trained Visual Document Understanding Models
Abstract: Numerous pre-training techniques for visual document understanding (VDU) have recently shown substantial improvements in performance across a wide range of document tasks. However, these pre-trained VDU models cannot guarantee continued success when the distribution of test data differs from the distribution of training data. In this paper, to investigate how robust existing pre-trained VDU models are to various distribution shifts, we first develop an out-of-distribution (OOD) benchmark termed Do-GOOD for the fine-Grained analysis on Document image-related tasks specifically. The Do-GOOD benchmark defines the underlying mechanisms that result in different distribution shifts and contains 9 OOD datasets covering 3 VDU related tasks, e.g., document information extraction, classification and question answering. We then evaluate the robustness and perform a fine-grained analysis of 5 latest VDU pre-trained models and 2 typical OOD generalization algorithms on these OOD datasets. Results from the experiments demonstrate that there is a significant performance gap between the in-distribution (ID) and OOD settings for document images, and that fine-grained analysis of distribution shifts can reveal the brittle nature of existing pre-trained VDU models and OOD generalization algorithms. The code and datasets for our Do-GOOD benchmark can be found at https://github.com/MAEHCM/Do-GOOD. | [] | Test |
43,865 | 24 | Title: Selective Guidance: Are All the Denoising Steps of Guided Diffusion Important?
Abstract: This study examines the impact of optimizing the Stable Diffusion (SD) guided inference pipeline. We propose optimizing certain denoising steps by limiting the noise computation to conditional noise and eliminating unconditional noise computation, thereby reducing the complexity of the target iterations by 50%. Additionally, we demonstrate that later iterations of the SD are less sensitive to optimization, making them ideal candidates for applying the suggested optimization. Our experiments show that optimizing the last 20% of the denoising loop iterations results in an 8.2% reduction in inference time with almost no perceivable changes to the human eye. Furthermore, we found that by extending the optimization to 50% of the last iterations, we can reduce inference time by approximately 20.3%, while still generating visually pleasing images. | [
8308
] | Train |
43,866 | 30 | Title: Revealing the Blind Spot of Sentence Encoder Evaluation by HEROS
Abstract: Existing sentence textual similarity benchmark datasets only use a single number to summarize how similar the sentence encoder's decision is to humans'. However, it is unclear what kind of sentence pairs a sentence encoder (SE) would consider similar. Moreover, existing SE benchmarks mainly consider sentence pairs with low lexical overlap, so it is unclear how the SEs behave when two sentences have high lexical overlap. We introduce a high-quality SE diagnostic dataset, HEROS. HEROS is constructed by transforming an original sentence into a new sentence based on certain rules to form a \textit{minimal pair}, and the minimal pair has high lexical overlaps. The rules include replacing a word with a synonym, an antonym, a typo, a random word, and converting the original sentence into its negation. Different rules yield different subsets of HEROS. By systematically comparing the performance of over 60 supervised and unsupervised SEs on HEROS, we reveal that most unsupervised sentence encoders are insensitive to negation. We find the datasets used to train the SE are the main determinants of what kind of sentence pairs an SE considers similar. We also show that even if two SEs have similar performance on STS benchmarks, they can have very different behavior on HEROS. Our result reveals the blind spot of traditional STS benchmarks when evaluating SEs. | [] | Train |
43,867 | 16 | Title: Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models
Abstract: Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. While revolutionary, current state-of-the-art diffusion models may still fail in generating images that fully convey the semantics in the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of catastrophic neglect, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of Generative Semantic Nursing (GSN), where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed Attend-and-Excite, we guide the model to refine the cross-attention units to attend to all subject tokens in the text prompt and strengthen --- or excite --- their activations, encouraging the model to generate all subjects described in the text prompt. We compare our approach to alternative approaches and demonstrate that it conveys the desired concepts more faithfully across a range of text prompts. Code is available at our project page: https://attendandexcite.github.io/Attend-and-Excite/. | [
37126,
43911,
25739,
30737,
41874,
7955,
35348,
21014,
30360,
24986,
15132,
15901,
25117,
20514,
2211,
37283,
45477,
31913,
7850,
1836,
18220,
39603,
40632,
24763,
18240,
25666,
36168,
7505,
23677,
44757,
1244,
34141,
14047,
6880,
20451,
23021,
... | Train |
43,868 | 24 | Title: Proposing Novel Extrapolative Compounds by Nested Variational Autoencoders
Abstract: Materials informatics (MI), which uses artificial intelligence and data analysis techniques to improve the efficiency of materials development, is attracting increasing interest from industry. One of its main applications is the rapid development of new high-performance compounds. Recently, several deep generative models have been proposed to suggest candidate compounds that are expected to satisfy the desired performance. However, they usually have the problem of requiring a large amount of experimental data for training to achieve sufficient accuracy. In actual cases, it is often possible to accumulate only about 1000 experimental data points at most. Therefore, the authors proposed a deep generative model with two nested variational autoencoders (VAEs). The outer VAE learns the structural features of compounds using large-scale public data, while the inner VAE learns the relationship between the latent variables of the outer VAE and the properties from small-scale experimental data. To generate high-performance compounds beyond the range of the training data, the authors also proposed a loss function that amplifies the correlation between a component of the latent variables of the inner VAE and material properties. The results indicated that this loss function contributes to improving the probability of generating high-performance candidates. Furthermore, as a result of a verification test with an actual customer in the chemical industry, it was confirmed that the proposed method is effective in reducing the number of experiments to $1/4$ compared to a conventional method. | [] | Validation |
43,869 | 16 | Title: GlueStick: Robust Image Matching by Sticking Points and Lines Together
Abstract: Line segments are powerful features complementary to points. They offer structural cues, robust to drastic viewpoint and illumination changes, and can be present even in texture-less areas. However, describing and matching them is more challenging compared to points due to partial occlusions, lack of texture, or repetitiveness. This paper introduces a new matching paradigm, where points, lines, and their descriptors are unified into a single wireframe structure. We propose GlueStick, a deep matching Graph Neural Network (GNN) that takes two wireframes from different images and leverages the connectivity information between nodes to better glue them together. In addition to the increased efficiency brought by the joint matching, we also demonstrate a large boost of performance when leveraging the complementary nature of these two features in a single architecture. We show that our matching strategy outperforms the state-of-the-art approaches independently matching line segments and points for a wide variety of datasets and tasks. The code is available at https://github.com/cvg/GlueStick. | [
13849
] | Train |
43,870 | 6 | Title: Lessons Learnt from a Multimodal Learning Analytics Deployment In-the-wild
Abstract: Multimodal Learning Analytics (MMLA) innovations make use of rapidly evolving sensing and artificial intelligence algorithms to collect rich data about learning activities that unfold in physical spaces. The analysis of these data is opening exciting new avenues for both studying and supporting learning. Yet, practical and logistical challenges commonly appear while deploying MMLA innovations "in-the-wild". These can span from technical issues related to enhancing the learning space with sensing capabilities, to the increased complexity of teachers’ tasks. These practicalities have been rarely investigated. This paper addresses this gap by presenting a set of lessons learnt from a 2-year human-centred MMLA in-the-wild study conducted with 399 students and 17 educators in the context of nursing education. The lessons learnt were synthesised into topics related to i) technological/physical aspects of the deployment; ii) multimodal data and interfaces; iii) the design process; iv) participation, ethics and privacy; and v) sustainability of the deployment. | [] | Test |
43,871 | 4 | Title: Introducing Packet-Level Analysis in Programmable Data Planes to Advance Network Intrusion Detection
Abstract: Programmable data planes offer precise control over the low-level processing steps applied to network packets, serving as a valuable tool for analysing malicious flows in the field of intrusion detection. Albeit with limitations on physical resources and capabilities, they allow for the efficient extraction of detailed traffic information, which can then be utilised by Machine Learning (ML) algorithms responsible for identifying security threats. In addressing resource constraints, existing solutions in the literature rely on compressing network data through the collection of statistical traffic features in the data plane. While this compression saves memory resources in switches and minimises the burden on the control channel between the data and the control plane, it also results in a loss of information available to the Network Intrusion Detection System (NIDS), limiting access to packet payload, categorical features, and the semantic understanding of network communications, such as the behaviour of packets within traffic flows. This paper proposes P4DDLe, a framework that exploits the flexibility of P4-based programmable data planes for packet-level feature extraction and pre-processing. P4DDLe leverages the programmable data plane to extract raw packet features from the network traffic, categorical features included, and to organise them in a way that the semantics of traffic flows is preserved. To minimise memory and control channel overheads, P4DDLe selectively processes and filters packet-level data, so that all and only the relevant features required by the NIDS are collected. The experimental evaluation with recent Distributed Denial of Service (DDoS) attack data demonstrates that the proposed approach is very efficient in collecting compact and high-quality representations of network flows, ensuring precise detection of DDoS attacks. | [] | Validation |
43,872 | 24 | Title: Faster Discrete Convex Function Minimization with Predictions: The M-Convex Case
Abstract: Recent years have seen a growing interest in accelerating optimization algorithms with machine-learned predictions. Sakaue and Oki (NeurIPS 2022) have developed a general framework that warm-starts the L-convex function minimization method with predictions, revealing the idea's usefulness for various discrete optimization problems. In this paper, we present a framework for using predictions to accelerate M-convex function minimization, thus complementing previous research and extending the range of discrete optimization algorithms that can benefit from predictions. Our framework is particularly effective for an important subclass called laminar convex minimization, which appears in many operations research applications. Our methods can improve time complexity bounds upon the best worst-case results by using predictions and even have potential to go beyond a lower-bound result. | [
23586,
34604
] | Test |
43,873 | 24 | Title: Neural Continuous-Discrete State Space Models for Irregularly-Sampled Time Series
Abstract: Learning accurate predictive models of real-world dynamic phenomena (e.g., climate, biological) remains a challenging task. One key issue is that the data generated by both natural and artificial processes often comprise time series that are irregularly sampled and/or contain missing observations. In this work, we propose the Neural Continuous-Discrete State Space Model (NCDSSM) for continuous-time modeling of time series through discrete-time observations. NCDSSM employs auxiliary variables to disentangle recognition from dynamics, thus requiring amortized inference only for the auxiliary variables. Leveraging techniques from continuous-discrete filtering theory, we demonstrate how to perform accurate Bayesian inference for the dynamic states. We propose three flexible parameterizations of the latent dynamics and an efficient training objective that marginalizes the dynamic states during inference. Empirical results on multiple benchmark datasets across various domains show improved imputation and forecasting performance of NCDSSM over existing models. | [
19704
] | Train |
43,874 | 24 | Title: Anomaly Detection Using One-Class SVM for Logs of Juniper Router Devices
Abstract: nan | [] | Train |
43,875 | 16 | Title: TransHuman: A Transformer-based Human Representation for Generalizable Neural Human Rendering
Abstract: In this paper, we focus on the task of generalizable neural human rendering, which trains conditional Neural Radiance Fields (NeRF) from multi-view videos of different characters. To handle the dynamic human motion, previous methods have primarily used a SparseConvNet (SPC)-based human representation to process the painted SMPL. However, such an SPC-based representation i) optimizes under the volatile observation space, which leads to pose misalignment between training and inference stages, and ii) lacks the global relationships among human parts that are critical for handling the incomplete painted SMPL. Tackling these issues, we present a brand-new framework named TransHuman, which learns the painted SMPL under the canonical space and captures the global relationships between human parts with transformers. Specifically, TransHuman is mainly composed of Transformer-based Human Encoding (TransHE), Deformable Partial Radiance Fields (DPaRF), and Fine-grained Detail Integration (FDI). TransHE first processes the painted SMPL under the canonical space via transformers for capturing the global relationships between human parts. Then, DPaRF binds each output token with a deformable radiance field for encoding the query point under the observation space. Finally, the FDI is employed to further integrate fine-grained information from reference images. Extensive experiments on ZJU-MoCap and H36M show that our TransHuman achieves a significant new state-of-the-art performance with high efficiency. Project page: https://pansanity666.github.io/TransHuman/ | [
24802
] | Train |
43,876 | 6 | Title: A Matter of Annotation: An Empirical Study on In Situ and Self-Recall Activity Annotations from Wearable Sensors
Abstract: Research into the detection of human activities from wearable sensors is a highly active field, benefiting numerous applications, from ambulatory monitoring of healthcare patients, via fitness coaching, to streamlining manual work processes. We present an empirical study that compares 4 different commonly used annotation methods utilized in user studies that focus on in-the-wild data. These methods can be grouped into user-driven, in situ annotations - which are performed before or during the activity is recorded - and recall methods - where participants annotate their data in hindsight at the end of the day. Our study illustrates that different labeling methodologies directly impact the annotations' quality, as well as the capabilities of a deep learning classifier trained with the data. We noticed that in situ methods produce fewer but more precise labels than recall methods. Furthermore, we combined an activity diary with a visualization tool that enables the participant to inspect and label their activity data. With the introduction of such a tool we were able to decrease missing annotations and increase the annotation consistency, and thereby the F1-score of the deep learning model by up to 8% (ranging between 82.1 and 90.4% F1-score). Furthermore, we discuss the advantages and disadvantages of the methods compared in our study, the biases they could introduce, and the consequences of their usage for human activity recognition studies, as well as possible solutions. | [] | Train |
43,877 | 4 | Title: Exploring Blockchain Technology through a Modular Lens: A Survey
Abstract: Blockchain has attracted significant attention in recent years due to its potential to revolutionize various industries by providing trustlessness. To comprehensively examine blockchain systems, this article presents both a macro-level overview of the most popular blockchain systems, and a micro-level analysis of a general blockchain framework and its crucial components. The macro-level exploration provides a big picture of the endeavors made by blockchain professionals over the years to enhance blockchain performance, while the micro-level investigation details the blockchain building blocks for deep technology comprehension. More specifically, this article introduces a general modular blockchain analytic framework that decomposes a blockchain system into interacting modules and then examines the major modules to cover the essential blockchain components of network, consensus, and distributed ledger at the micro-level. The framework as well as the modular analysis jointly build a foundation for designing scalable, flexible, and application-adaptive blockchains that can meet diverse requirements. Additionally, this article explores popular technologies that can be integrated with blockchain to expand functionality and highlights major challenges. Such a study provides critical insights to overcome the obstacles in designing novel blockchain systems and facilitates the further development of blockchain as a digital infrastructure to service new applications. | [
10482
] | Validation |