| node_id (int64, 0–76.9k) | label (int64, 0–39) | text (string, 13–124k chars) | neighbors (list, 0–3.32k items) | mask (string, 4 classes) |
|---|---|---|---|---|
43,678 | 5 | Title: HitGNN: High-throughput GNN Training Framework on CPU+Multi-FPGA Heterogeneous Platform
Abstract: As the size of real-world graphs increases, training Graph Neural Networks (GNNs) has become time-consuming and requires acceleration. While previous works have demonstrated the potential of utilizing FPGA for accelerating GNN training, few works have been carried out to accelerate GNN training with multiple FPGAs due to the necessity of hardware expertise and substantial development effort. To this end, we propose HitGNN, a framework that enables users to effortlessly map GNN training workloads onto a CPU-Multi-FPGA platform for acceleration. In particular, HitGNN takes the user-defined synchronous GNN training algorithm, GNN model, and platform metadata as input, determines the design parameters based on the platform metadata, and performs hardware mapping onto the CPU+Multi-FPGA platform, automatically. HitGNN consists of the following building blocks: (1) high-level application programming interfaces (APIs) that allow users to specify various synchronous GNN training algorithms and GNN models with only a handful of lines of code; (2) a software generator that generates a host program that performs mini-batch sampling, manages CPU-FPGA communication, and handles workload balancing among the FPGAs; (3) an accelerator generator that generates GNN kernels with optimized datapath and memory organization. We show that existing synchronous GNN training algorithms such as DistDGL and PaGraph can be easily deployed on a CPU+Multi-FPGA platform using our framework, while achieving high training throughput. Compared with the state-of-the-art frameworks that accelerate synchronous GNN training on a multi-GPU platform, HitGNN achieves up to 27.21x bandwidth efficiency, and up to 4.26x speedup using much less compute power and memory bandwidth than GPUs. In addition, HitGNN demonstrates good scalability to 16 FPGAs on a CPU+Multi-FPGA platform. | [] | Validation |
43,679 | 2 | Title: On the Lattice of Program Metrics
Abstract: In this paper we are concerned with understanding the nature of program metrics for calculi with higher-order types, seen as natural generalizations of program equivalences. Some of the metrics we are interested in are well-known, such as those based on the interpretation of terms in metric spaces and those obtained by generalizing observational equivalence. We also introduce a new one, called the interactive metric, built by applying the well-known Int-Construction to the category of metric complete partial orders. Our aim is then to understand how these metrics relate to each other, i.e., whether and in which cases one such metric refines another, in analogy with corresponding well-studied problems about program equivalences. The results we obtain are twofold. We first show that the metrics of semantic origin, i.e., the denotational and interactive ones, lie \emph{in between} the observational and equational metrics and that in some cases, these inclusions are strict. Then, we give a result about the relationship between the denotational and interactive metrics, revealing that the former is less discriminating than the latter. All our results are given for a linear lambda-calculus, and some of them can be generalized to calculi with graded comonads, in the style of Fuzz. | [] | Validation |
43,680 | 24 | Title: Concept-Monitor: Understanding DNN training through individual neurons
Abstract: In this work, we propose a general framework called Concept-Monitor to help demystify the black-box DNN training processes automatically using a novel unified embedding space and concept diversity metric. Concept-Monitor enables human-interpretable visualization and indicators of the DNN training processes and facilitates transparency as well as a deeper understanding of how DNNs develop during training. Inspired by these findings, we also propose a new training regularizer that incentivizes hidden neurons to learn diverse concepts, which we show to improve training performance. Finally, we apply Concept-Monitor to conduct several case studies on different training paradigms including adversarial training, fine-tuning and network pruning via the Lottery Ticket Hypothesis. | [
39217
] | Test |
43,681 | 5 | Title: PSI/J: A Portable Interface for Submitting, Monitoring, and Managing Jobs
Abstract: It is generally desirable for high-performance computing (HPC) applications to be portable between HPC systems, for example to make use of more performant hardware, make effective use of allocations, and to co-locate compute jobs with large datasets. Unfortunately, moving scientific applications between HPC systems is challenging for various reasons, most notably that HPC systems have different HPC schedulers. We introduce PSI/J, a job management abstraction API intended to simplify the construction of software components and applications that are portable over various HPC scheduler implementations. We argue that such a system is both necessary and that no viable alternative currently exists. We analyze similar notable APIs and attempt to determine the factors that influenced their evolution and adoption by the HPC community. We base the design of PSI/J on that analysis. We describe how PSI/J has been integrated in three workflow systems and one application, and also show via experiments that PSI/J imposes minimal overhead. | [
2728, 45790
] | Train |
43,682 | 30 | Title: Large Language Model Displays Emergent Ability to Interpret Novel Literary Metaphors
Abstract: Recent advances in the performance of large language models (LLMs) have sparked debate over whether, given sufficient training, high-level human abilities emerge in such generic forms of artificial intelligence (AI). Despite the exceptional performance of LLMs on a wide range of tasks involving natural language processing and reasoning, there has been sharp disagreement as to whether their abilities extend to more creative human abilities. A core example is the ability to interpret novel metaphors. Given the enormous and non-curated text corpora used to train LLMs, a serious obstacle to designing tests is the requirement of finding novel yet high-quality metaphors that are unlikely to have been included in the training data. Here we assessed the ability of GPT-4, a state-of-the-art large language model, to provide natural-language interpretations of novel literary metaphors drawn from Serbian poetry and translated into English. Despite exhibiting no signs of having been exposed to these metaphors previously, the AI system consistently produced detailed and incisive interpretations. Human judges, blind to the fact that an AI model was involved, rated metaphor interpretations generated by GPT-4 as superior to those provided by a group of college students. In interpreting reversed metaphors, GPT-4, as well as humans, exhibited signs of sensitivity to the Gricean cooperative principle. These results indicate that LLMs such as GPT-4 have acquired an emergent ability to interpret complex novel metaphors. | [
19720, 13510
] | Validation |
43,683 | 24 | Title: Conformal prediction under ambiguous ground truth
Abstract: In safety-critical classification tasks, conformal prediction allows to perform rigorous uncertainty quantification by providing confidence sets including the true class with a user-specified probability. This generally assumes the availability of a held-out calibration set with access to ground truth labels. Unfortunately, in many domains, such labels are difficult to obtain and usually approximated by aggregating expert opinions. In fact, this holds true for almost all datasets, including well-known ones such as CIFAR and ImageNet. Applying conformal prediction using such labels underestimates uncertainty. Indeed, when expert opinions are not resolvable, there is inherent ambiguity present in the labels. That is, we do not have ``crisp'', definitive ground truth labels and this uncertainty should be taken into account during calibration. In this paper, we develop a conformal prediction framework for such ambiguous ground truth settings which relies on an approximation of the underlying posterior distribution of labels given inputs. We demonstrate our methodology on synthetic and real datasets, including a case study of skin condition classification in dermatology. | [
32838
] | Validation |
43,684 | 28 | Title: New Codes on High Dimensional Expanders
Abstract: We describe a new parameterized family of symmetric error-correcting codes with low-density parity-check matrices (LDPC). Our codes can be described in two seemingly different ways. First, in relation to Reed-Muller codes: our codes are functions on a subset of $\mathbb{F}^n$ whose restrictions to a prescribed set of affine lines has low degree. Alternatively, they are Tanner codes on high dimensional expanders, where the coordinates of the codeword correspond to triangles of a $2$-dimensional expander, such that around every edge the local view forms a Reed-Solomon codeword. For some range of parameters our codes are provably locally testable, and their dimension is some fixed power of the block length. For another range of parameters our codes have distance and dimension that are both linear in the block length, but we do not know if they are locally testable. The codes also have the multiplication property: the coordinate-wise product of two codewords is a codeword in a related code. The definition of the codes relies on the construction of a specific family of simplicial complexes which is a slight variant on the coset complexes of Kaufman and Oppenheim. We show a novel way to embed the triangles of these complexes into $\mathbb{F}^n$, with the property that links of edges embed as affine lines in $\mathbb{F}^n$. We rely on this embedding to lower bound the rate of these codes in a way that avoids constraint-counting and thereby achieves non-trivial rate even when the local codes themselves have arbitrarily small rate, and in particular below $1/2$. | [] | Train |
43,685 | 34 | Title: Hutchinson's Estimator is Bad at Kronecker-Trace-Estimation
Abstract: We study the problem of estimating the trace of a matrix $\mathbf{A}$ that can only be accessed through Kronecker-matrix-vector products. That is, for any Kronecker-structured vector $\boldsymbol{\mathrm{x}} = \otimes_{i=1}^k \boldsymbol{\mathrm{x}}_i$, we can compute $\mathbf{A}\boldsymbol{\mathrm{x}}$. We focus on the natural generalization of Hutchinson's Estimator to this setting, proving tight rates for the number of matrix-vector products this estimator needs to find a $(1\pm\varepsilon)$ approximation to the trace of $\mathbf{A}$. We find an exact equation for the variance of the estimator when using a Kronecker of Gaussian vectors, revealing an intimate relationship between Hutchinson's Estimator, the partial trace operator, and the partial transpose operator. Using this equation, we show that when using real vectors, in the worst case, this estimator needs $O(\frac{3^k}{\varepsilon^2})$ products to recover a $(1\pm\varepsilon)$ approximation of the trace of any PSD $\mathbf{A}$, and a matching lower bound for certain PSD $\mathbf{A}$. However, when using complex vectors, this can be exponentially improved to $\Theta(\frac{2^k}{\varepsilon^2})$. We show that Hutchinson's Estimator converges slowest when $\mathbf{A}$ itself also has Kronecker structure. We conclude with some theoretical evidence suggesting that, by combining Hutchinson's Estimator with other techniques, it may be possible to avoid the exponential dependence on $k$. | [] | Test |
43,686 | 16 | Title: TPS++: Attention-Enhanced Thin-Plate Spline for Scene Text Recognition
Abstract: Text irregularities pose significant challenges to scene text recognizers. Thin-Plate Spline (TPS)-based rectification is widely regarded as an effective means to deal with them. Currently, the calculation of TPS transformation parameters purely depends on the quality of regressed text borders. It ignores the text content and often leads to unsatisfactory rectified results for severely distorted text. In this work, we introduce TPS++, an attention-enhanced TPS transformation that incorporates the attention mechanism to text rectification for the first time. TPS++ formulates the parameter calculation as a joint process of foreground control point regression and content-based attention score estimation, which is computed by a dedicated designed gated-attention block. TPS++ builds a more flexible content-aware rectifier, generating a natural text correction that is easier to read by the subsequent recognizer. Moreover, TPS++ shares the feature backbone with the recognizer in part and implements the rectification at feature-level rather than image-level, incurring only a small overhead in terms of parameters and inference time. Experiments on public benchmarks show that TPS++ consistently improves the recognition and achieves state-of-the-art accuracy. Meanwhile, it generalizes well on different backbones and recognizers. Code is at https://github.com/simplify23/TPS_PP. | [
4025, 9254
] | Train |
43,687 | 24 | Title: Theoretically Principled Trade-off for Stateful Defenses against Query-Based Black-Box Attacks
Abstract: Adversarial examples threaten the integrity of machine learning systems with alarming success rates even under constrained black-box conditions. Stateful defenses have emerged as an effective countermeasure, detecting potential attacks by maintaining a buffer of recent queries and detecting new queries that are too similar. However, these defenses fundamentally pose a trade-off between attack detection and false positive rates, and this trade-off is typically optimized by hand-picking feature extractors and similarity thresholds that empirically work well. There is little current understanding as to the formal limits of this trade-off and the exact properties of the feature extractors/underlying problem domain that influence it. This work aims to address this gap by offering a theoretical characterization of the trade-off between detection and false positive rates for stateful defenses. We provide upper bounds for detection rates of a general class of feature extractors and analyze the impact of this trade-off on the convergence of black-box attacks. We then support our theoretical findings with empirical evaluations across multiple datasets and stateful defenses. | [] | Test |
43,688 | 16 | Title: Deep Semantic Graph Matching for Large-scale Outdoor Point Clouds Registration
Abstract: The current point cloud registration methods are mainly based on geometric information and usually ignore the semantic information in the point clouds. In this paper, we treat the point cloud registration problem as a semantic instance matching and registration task, and propose a deep semantic graph matching method for large-scale outdoor point cloud registration. Firstly, the semantic category labels of 3D point clouds are obtained by utilizing a large-scale point cloud semantic segmentation network. The adjacent points with the same category labels are then clustered together by using the Euclidean clustering algorithm to obtain the semantic instances. Secondly, the semantic adjacency graph is constructed based on the spatial adjacency relation of semantic instances. Three kinds of high-dimensional features including geometric shape features, semantic categorical features and spatial distribution features are learned through a graph convolutional network, and enhanced based on an attention mechanism. Thirdly, the semantic instance matching problem is modeled as an optimal transport problem, and solved through an optimal matching layer. Finally, according to the matched semantic instances, the geometric transformation matrix between two point clouds is first obtained by the SVD algorithm and then refined by the ICP algorithm. The experiments are conducted on the KITTI Odometry dataset, and the average relative translation error and average relative rotation error of the proposed method are 6.6cm and 0.229° respectively. | [] | Test |
43,689 | 30 | Title: OYXOY: A Modern NLP Test Suite for Modern Greek
Abstract: This paper serves as a foundational step towards the development of a linguistically motivated and technically relevant evaluation suite for Greek NLP. We initiate this endeavor by introducing four expert-verified evaluation tasks, specifically targeted at natural language inference, word sense disambiguation (through example comparison or sense selection) and metaphor detection. More than language-adapted replicas of existing tasks, we contribute two innovations which will resonate with the broader resource and evaluation community. Firstly, our inference dataset is the first of its kind, marking not just \textit{one}, but rather \textit{all} possible inference labels, accounting for possible shifts due to e.g. ambiguity or polysemy. Secondly, we demonstrate a cost-efficient method to obtain datasets for under-resourced languages. Using ChatGPT as a language-neutral parser, we transform the Dictionary of Standard Modern Greek into a structured format, from which we derive the other three tasks through simple projections. Alongside each task, we conduct experiments using currently available state of the art machinery. Our experimental baselines affirm the challenging nature of our tasks and highlight the need for expedited progress in order for the Greek NLP ecosystem to keep pace with contemporary mainstream research. | [
2376, 34140, 39254
] | Train |
43,690 | 24 | Title: Evaluating Explanation Methods for Multivariate Time Series Classification
Abstract: Multivariate time series classification is an important computational task arising in applications where data is recorded over time and over multiple channels. For example, a smartwatch can record the acceleration and orientation of a person's motion, and these signals are recorded as multivariate time series. We can classify this data to understand and predict human movement and various properties such as fitness levels. In many applications classification alone is not enough; we often need to classify but also understand what the model learns (e.g., why was a prediction given, based on what information in the data). The main focus of this paper is on analysing and evaluating explanation methods tailored to Multivariate Time Series Classification (MTSC). We focus on saliency-based explanation methods that can point out the most relevant channels and time series points for the classification decision. We analyse two popular and accurate multivariate time series classifiers, ROCKET and dResNet, as well as two popular explanation methods, SHAP and dCAM. We study these methods on 3 synthetic datasets and 2 real-world datasets and provide a quantitative and qualitative analysis of the explanations provided. We find that flattening the multivariate datasets by concatenating the channels works as well as using multivariate classifiers directly and adaptations of SHAP for MTSC work quite well. Additionally, we also find that the popular synthetic datasets we used are not suitable for time series analysis. | [
37584
] | Train |
43,691 | 16 | Title: Beyond One-to-One: Rethinking the Referring Image Segmentation
Abstract: Referring image segmentation aims to segment the target object referred by a natural language expression. However, previous methods rely on the strong assumption that one sentence must describe one target in the image, which is often not the case in real-world applications. As a result, such methods fail when the expressions refer to either no objects or multiple objects. In this paper, we address this issue from two perspectives. First, we propose a Dual Multi-Modal Interaction (DMMI) Network, which contains two decoder branches and enables information flow in two directions. In the text-to-image decoder, text embedding is utilized to query the visual feature and localize the corresponding target. Meanwhile, the image-to-text decoder is implemented to reconstruct the erased entity-phrase conditioned on the visual feature. In this way, visual features are encouraged to contain the critical semantic information about target entity, which supports the accurate segmentation in the text-to-image decoder in turn. Secondly, we collect a new challenging but realistic dataset called Ref-ZOM, which includes image-text pairs under different settings. Extensive experiments demonstrate our method achieves state-of-the-art performance on different datasets, and the Ref-ZOM-trained model performs well on various types of text inputs. Codes and datasets are available at https://github.com/toggle1995/RIS-DMMI. | [] | Train |
43,692 | 16 | Title: Leaf Only SAM: A Segment Anything Pipeline for Zero-Shot Automated Leaf Segmentation
Abstract: Segment Anything Model (SAM) is a new foundation model that can be used as a zero-shot object segmentation method with the use of either guide prompts such as bounding boxes, polygons, or points. Alternatively, additional post processing steps can be used to identify objects of interest after segmenting everything in an image. Here we present a method using segment anything together with a series of post processing steps to segment potato leaves, called Leaf Only SAM. The advantage of this proposed method is that it does not require any training data to produce its results so has many applications across the field of plant phenotyping where there is limited high quality annotated data available. We compare the performance of Leaf Only SAM to a Mask R-CNN model which has been fine-tuned on our small novel potato leaf dataset. On the evaluation dataset, Leaf Only SAM finds an average recall of 63.2 and an average precision of 60.3, compared to recall of 78.7 and precision of 74.7 for Mask R-CNN. Leaf Only SAM does not perform better than the fine-tuned Mask R-CNN model on our data, but the SAM based model does not require any extra training or annotation of our new dataset. This shows there is potential to use SAM as a zero-shot classifier with the addition of post processing steps. | [
12704, 28718, 41008, 29270, 18360, 35263
] | Validation |
43,693 | 27 | Title: One-shot Imitation Learning via Interaction Warping
Abstract: Imitation learning of robot policies from few demonstrations is crucial in open-ended applications. We propose a new method, Interaction Warping, for learning SE(3) robotic manipulation policies from a single demonstration. We infer the 3D mesh of each object in the environment using shape warping, a technique for aligning point clouds across object instances. Then, we represent manipulation actions as keypoints on objects, which can be warped with the shape of the object. We show successful one-shot imitation learning on three simulated and real-world object re-arrangement tasks. We also demonstrate the ability of our method to predict object meshes and robot grasps in the wild. | [
35541, 28142
] | Test |
43,694 | 31 | Title: Adaptive Vague Preference Policy Learning for Multi-round Conversational Recommendation
Abstract: Conversational recommendation systems (CRS) effectively address information asymmetry by dynamically eliciting user preferences through multi-turn interactions. Existing CRS widely assume that users have clear preferences. Under this assumption, the agent will completely trust the user feedback and treat the accepted or rejected signals as strong indicators to filter items and reduce the candidate space, which may lead to the problem of over-filtering. However, in reality, users' preferences are often vague and volatile, with uncertainty about their desires and changing decisions during interactions. To address this issue, we introduce a novel scenario called Vague Preference Multi-round Conversational Recommendation (VPMCR), which considers users' vague and volatile preferences in CRS. VPMCR employs a soft estimation mechanism to assign a non-zero confidence score for all candidate items to be displayed, naturally avoiding the over-filtering problem. In the VPMCR setting, we introduce a solution called Adaptive Vague Preference Policy Learning (AVPPL), which consists of two main components: Uncertainty-aware Soft Estimation (USE) and Uncertainty-aware Policy Learning (UPL). USE estimates the uncertainty of users' vague feedback and captures their dynamic preferences using a choice-based preferences extraction module and a time-aware decaying strategy. UPL leverages the preference distribution estimated by USE to guide the conversation and adapt to changes in users' preferences to make recommendations or ask for attributes. Our extensive experiments demonstrate the effectiveness of our method in the VPMCR scenario, highlighting its potential for practical applications and improving the overall performance and applicability of CRS in real-world settings, particularly for users with vague or dynamic preferences. | [
25312, 4147, 1998
] | Train |
43,695 | 23 | Title: Quantum Software Analytics: Opportunities and Challenges
Abstract: Quantum computing systems depend on the principles of quantum mechanics to perform multiple challenging tasks more efficiently than their classical counterparts. In classical software engineering, the software life cycle is used to document and structure the processes of design, implementation, and maintenance of software applications. It helps stakeholders understand how to build an application. In this paper, we summarize a set of software analytics topics and techniques in the development life cycle that can be leveraged and integrated into quantum software application development. The results of this work can assist researchers and practitioners in better understanding the quantum-specific emerging development activities, challenges, and opportunities in the next generation of quantum software. | [] | Train |
43,696 | 16 | Title: Baybayin Character Instance Detection
Abstract: The Philippine Government recently passed the "National Writing System Act," which promotes using Baybayin in Philippine texts. In support of this effort to promote the use of Baybayin, we present a computer vision system which can aid individuals who cannot easily read Baybayin script. In this paper, we survey the existing methods of identifying Baybayin scripts using computer vision and machine learning techniques and discuss their capabilities and limitations. Further, we propose a Baybayin Optical Character Instance Segmentation and Classification model using state-of-the-art Convolutional Neural Networks (CNNs) that detects Baybayin character instances in an image then outputs the Latin alphabet counterparts of each character instance in the image. Most existing systems are limited to character-level image classification and often misclassify or do not natively support characters with diacritics. In addition, these existing models often have specific input requirements that limit them to classifying Baybayin text in a controlled setting, such as limitations in clarity and contrast, among others. To our knowledge, our proposed method is the first end-to-end character instance detection model for Baybayin, achieving a mAP50 score of 93.30%, mAP50-95 score of 80.50%, and F1-Score of 84.84%. | [] | Train |
43,697 | 24 | Title: Minimizing Trajectory Curvature of ODE-based Generative Models
Abstract: Recent ODE/SDE-based generative models, such as diffusion models, rectified flows, and flow matching, define a generative process as a time reversal of a fixed forward process. Even though these models show impressive performance on large-scale datasets, numerical simulation requires multiple evaluations of a neural network, leading to a slow sampling speed. We attribute the reason to the high curvature of the learned generative trajectories, as it is directly related to the truncation error of a numerical solver. Based on the relationship between the forward process and the curvature, here we present an efficient method of training the forward process to minimize the curvature of generative trajectories without any ODE/SDE simulation. Experiments show that our method achieves a lower curvature than previous models and, therefore, decreased sampling costs while maintaining competitive performance. Code is available at https://github.com/sangyun884/fast-ode. | [
43881, 12889, 11146, 36483
] | Train |
43,698 | 31 | Title: Automated Prompting for Non-overlapping Cross-domain Sequential Recommendation
Abstract: Cross-domain Recommendation (CR) has been extensively studied in recent years to alleviate the data sparsity issue in recommender systems by utilizing different domain information. In this work, we focus on the more general Non-overlapping Cross-domain Sequential Recommendation (NCSR) scenario. NCSR is challenging because there are no overlapped entities (e.g., users and items) between domains, and there is only users' implicit feedback and no content information. Previous CR methods cannot solve NCSR well, since (1) they either need extra content to align domains or need explicit domain alignment constraints to reduce the domain discrepancy from domain-invariant features, (2) they pay more attention to users' explicit feedback (i.e., users' rating data) and cannot well capture their sequential interaction patterns, (3) they usually do a single-target cross-domain recommendation task and seldom investigate the dual-target ones. Considering the above challenges, we propose Prompt Learning-based Cross-domain Recommender (PLCR), an automated prompting-based recommendation framework for the NCSR task. Specifically, to address the challenge (1), PLCR resorts to learning domain-invariant and domain-specific representations via its prompt learning component, where the domain alignment constraint is discarded. For challenges (2) and (3), PLCR introduces a pre-trained sequence encoder to learn users' sequential interaction patterns, and conducts a dual-learning target with a separation constraint to enhance recommendations in both domains. Our empirical study on two sub-collections of Amazon demonstrates the advance of PLCR compared with some related SOTA methods. | [
6008, 33800, 4610, 23375
] | Train |
43,699 | 27 | Title: Robust Pivoting Manipulation using Contact Implicit Bilevel Optimization
Abstract: Generalizable manipulation requires that robots be able to interact with novel objects and environment. This requirement makes manipulation extremely challenging as a robot has to reason about complex frictional interactions with uncertainty in physical properties of the object and the environment. In this paper, we study robust optimization for planning of pivoting manipulation in the presence of uncertainties. We present insights about how friction can be exploited to compensate for inaccuracies in the estimates of the physical properties during manipulation. Under certain assumptions, we derive analytical expressions for stability margin provided by friction during pivoting manipulation. This margin is then used in a Contact Implicit Bilevel Optimization (CIBO) framework to optimize a trajectory that maximizes this stability margin to provide robustness against uncertainty in several physical parameters of the object. We present analysis of the stability margin with respect to several parameters involved in the underlying bilevel optimization problem. We demonstrate our proposed method using a 6 DoF manipulator for manipulating several different objects. | [
31572, 44583
] | Test |
43,700 | 16 | Title: Generalizable Metric Network for Cross-domain Person Re-identification
Abstract: Person Re-identification (Re-ID) is a crucial technique for public security and has made significant progress in supervised settings. However, the cross-domain (i.e., domain generalization) scene presents a challenge in Re-ID tasks due to unseen test domains and domain-shift between the training and test sets. To tackle this challenge, most existing methods aim to learn domain-invariant or robust features for all domains. In this paper, we observe that the data-distribution gap between the training and test sets is smaller in the sample-pair space than in the sample-instance space. Based on this observation, we propose a Generalizable Metric Network (GMN) to further explore sample similarity in the sample-pair space. Specifically, we add a Metric Network (M-Net) after the main network and train it on positive and negative sample-pair features, which is then employed during the test stage. Additionally, we introduce the Dropout-based Perturbation (DP) module to enhance the generalization capability of the metric network by enriching the sample-pair diversity. Moreover, we develop a Pair-Identity Center (PIC) loss to enhance the model's discrimination by ensuring that sample-pair features with the same pair-identity are consistent. We validate the effectiveness of our proposed method through a lot of experiments on multiple benchmark datasets and confirm the value of each module in our GMN. | [] | Train |
43,701 | 16 | Title: Robustness Analysis on Foundational Segmentation Models
Abstract: Due to the increase in computational resources and accessibility of data, large deep learning models trained on copious amounts of data using self-supervised or semi-supervised learning have emerged. These "foundation" models are often adapted to a variety of downstream tasks like classification, object detection, and segmentation with little-to-no training on the target dataset. In this work, we perform a robustness analysis of Visual Foundation Models (VFMs) for segmentation tasks and compare them to supervised models of smaller scale. We focus on robustness against real-world distribution-shift perturbations. We benchmark four state-of-the-art segmentation architectures using two different datasets, COCO and ADE20K, with 17 different perturbations at 5 severity levels each. We find interesting insights that include (1) VFMs are not robust to compression-based corruptions, (2) while the selected VFMs do not significantly outperform or exhibit more robustness compared to non-VFM models, they remain competitively robust in zero-shot evaluations, particularly when non-VFM models are under supervision, and (3) selected VFMs demonstrate greater resilience to specific categories of objects, likely due to their open-vocabulary training paradigm, a feature that non-VFM models typically lack. We posit that the suggested robustness evaluation introduces new requirements for foundational models, thus sparking further research to enhance their performance. | [
37254,
42599,
6380,
37133,
14168
] | Train |
43,702 | 24 | Title: Martian time-series unraveled: A multi-scale nested approach with factorial variational autoencoders
Abstract: Unsupervised source separation involves unraveling an unknown set of source signals recorded through a mixing operator, with limited prior knowledge about the sources, and only access to a dataset of signal mixtures. This problem is inherently ill-posed and is further challenged by the variety of time-scales exhibited by sources in time series data. Existing methods typically rely on a preselected window size that limits their capacity to handle multi-scale sources. To address this issue, instead of operating in the time domain, we propose an unsupervised multi-scale clustering and source separation framework by leveraging wavelet scattering covariances that provide a low-dimensional representation of stochastic processes, capable of distinguishing between different non-Gaussian stochastic processes. Nested within this representation space, we develop a factorial Gaussian-mixture variational autoencoder that is trained to (1) probabilistically cluster sources at different time-scales and (2) independently sample scattering covariance representations associated with each cluster. Using samples from each cluster as prior information, we formulate source separation as an optimization problem in the wavelet scattering covariance representation space, resulting in separated sources in the time domain. When applied to seismic data recorded during the NASA InSight mission on Mars, our multi-scale nested approach proves to be a powerful tool for discriminating between sources varying greatly in time-scale, e.g., minute-long transient one-sided pulses (known as ``glitches'') and structured ambient noises resulting from atmospheric activities that typically last for tens of minutes. These results provide an opportunity to conduct further investigations into the isolated sources related to atmospheric-surface interactions, thermal relaxations, and other complex phenomena. | [
14440
] | Test |
43,703 | 27 | Title: Learning from Local Experience: Informed Sampling Distributions for High Dimensional Motion Planning
Abstract: This paper presents a sampling-based motion planning framework that leverages the geometry of obstacles in a workspace as well as prior experiences from motion planning problems. Previous studies have demonstrated the benefits of utilizing prior solutions to motion planning problems for improving planning efficiency. However, particularly for high-dimensional systems, achieving high performance across randomized environments remains a technical challenge for experience-based approaches due to the substantial variance between each query. To address this challenge, we propose a novel approach that involves decoupling the problem into subproblems through algorithmic workspace decomposition and graph search. Additionally, we capitalize on prior experience within each subproblem. This approach effectively reduces the variance across different problems, leading to improved performance for experience-based planners. To validate the effectiveness of our framework, we conduct experiments using 2D and 6D robotic systems. The experimental results demonstrate that our framework outperforms existing algorithms in terms of planning time and cost. | [] | Test |
43,704 | 16 | Title: HUMS2023 Data Challenge Result Submission
Abstract: We implemented a simple method for early detection in this research. The implemented methods are plotting the given mat files and analyzing scalogram images generated by performing Continuous Wavelet Transform (CWT) on the samples. Computing the mean, standard deviation (STD), and peak-to-peak (P2P) values from each signal also helped detect signs of faults. We have implemented the autoregressive integrated moving average (ARIMA) method to track the progression. | [] | Train |
43,705 | 30 | Title: Watermarking Text Generated by Black-Box Language Models
Abstract: LLMs now exhibit human-like skills in various fields, leading to worries about misuse. Thus, detecting generated text is crucial. However, passive detection methods are stuck in domain specificity and limited adversarial robustness. To achieve reliable detection, a watermark-based method was proposed for white-box LLMs, allowing them to embed watermarks during text generation. The method involves randomly dividing the model vocabulary to obtain a special list and adjusting the probability distribution to promote the selection of words in the list. A detection algorithm aware of the list can identify the watermarked text. However, this method is not applicable in many real-world scenarios where only black-box language models are available. For instance, third parties that develop API-based vertical applications cannot watermark text themselves because API providers only supply generated text and withhold probability distributions to shield their commercial interests. To allow third parties to autonomously inject watermarks into generated text, we develop a watermarking framework for black-box language model usage scenarios. Specifically, we first define a binary encoding function to compute a random binary encoding corresponding to a word. The encodings computed for non-watermarked text conform to a Bernoulli distribution, wherein the probability of a word representing bit-1 is approximately 0.5. To inject a watermark, we alter the distribution by selectively replacing words representing bit-0 with context-based synonyms that represent bit-1. A statistical test is then used to identify the watermark. Experiments demonstrate the effectiveness of our method on both Chinese and English datasets. Furthermore, results under re-translation, polishing, word deletion, and synonym substitution attacks reveal that it is arduous to remove the watermark without compromising the original semantics. | [
6531,
35044,
10723,
13700,
22476,
38235,
43930,
315
] | Test |
43,706 | 16 | Title: A Specific Task-oriented Semantic Image Communication System for substation patrol inspection
Abstract: Intelligent inspection robots are widely used in substation patrol inspection; they help check potential safety hazards by patrolling the substation and sending back scene images. However, when patrolling marginal areas with weak signal, the scene images cannot be successfully transmitted for use in hidden danger elimination, which greatly reduces the quality of the robots' daily work. To solve this problem, a Specific Task-oriented Semantic Communication System for Image (STSCI) is designed, which involves semantic feature extraction, transmission, restoration, and enhancement to obtain clearer images sent by intelligent robots under weak signals. Inspired by the fact that only some specific details of the image are needed in the substation patrol inspection task, we propose a new paradigm of semantic enhancement for such specific tasks to ensure the clarity of key semantic information when facing a low bit rate or a low signal-to-noise ratio. Across reality-based simulations, experiments show that our STSCI can generally surpass traditional image-compression-based, channel-coding-based, and other semantic communication systems in the substation patrol inspection task at a lower bit rate, even under a low signal-to-noise ratio. | [] | Train |
43,707 | 30 | Title: Probabilistic Linguistic Knowledge and Token-level Text Augmentation
Abstract: This paper investigates the effectiveness of token-level text augmentation and the role of probabilistic linguistic knowledge within a linguistically-motivated evaluation context. Two text augmentation programs, REDA and REDA$_{NG}$, were developed, both implementing five token-level text editing operations: Synonym Replacement (SR), Random Swap (RS), Random Insertion (RI), Random Deletion (RD), and Random Mix (RM). REDA$_{NG}$ leverages pretrained $n$-gram language models to select the most likely augmented texts from REDA's output. Comprehensive and fine-grained experiments were conducted on a binary question matching classification task in both Chinese and English. The results strongly refute the general effectiveness of the five token-level text augmentation techniques under investigation, whether applied together or separately, and irrespective of various common classification model types used, including transformers. Furthermore, the role of probabilistic linguistic knowledge is found to be minimal. | [] | Validation |
43,708 | 24 | Title: Tune As You Scale: Hyperparameter Optimization For Compute Efficient Training
Abstract: Hyperparameter tuning of deep learning models can lead to order-of-magnitude performance gains for the same amount of compute. Despite this, systematic tuning is uncommon, particularly for large models, which are expensive to evaluate and tend to have many hyperparameters, necessitating difficult judgment calls about tradeoffs, budgets, and search bounds. To address these issues and propose a practical method for robustly tuning large models, we present Cost-Aware Pareto Region Bayesian Search (CARBS), a Bayesian optimization algorithm that performs local search around the performance-cost Pareto frontier. CARBS does well even in unbounded search spaces with many hyperparameters, learns scaling relationships so that it can tune models even as they are scaled up, and automates much of the"black magic"of tuning. Among our results, we effectively solve the entire ProcGen benchmark just by tuning a simple baseline (PPO, as provided in the original ProcGen paper). We also reproduce the model size vs. training tokens scaling result from the Chinchilla project (Hoffmann et al. 2022), while simultaneously discovering scaling laws for every other hyperparameter, via an easy automated process that uses significantly less compute and is applicable to any deep learning problem (not just language models). | [] | Train |
43,709 | 13 | Title: Neuromorphic Online Learning for Spatiotemporal Patterns with a Forward-only Timeline
Abstract: Spiking neural networks (SNNs) are bio-plausible computing models with high energy efficiency. The temporal dynamics of neurons and synapses enable them to detect temporal patterns and generate sequences. While Backpropagation Through Time (BPTT) is traditionally used to train SNNs, it is not suitable for online learning of embedded applications due to its high computation and memory cost as well as extended latency. Previous works have proposed online learning algorithms, but they often utilize highly simplified spiking neuron models without synaptic dynamics and reset feedback, resulting in subpar performance. In this work, we present Spatiotemporal Online Learning for Synaptic Adaptation (SOLSA), specifically designed for online learning of SNNs composed of Leaky Integrate and Fire (LIF) neurons with exponentially decayed synapses and soft reset. The algorithm not only learns the synaptic weight but also adapts the temporal filters associated with the synapses. Compared to the BPTT algorithm, SOLSA has a much lower memory requirement and achieves a more balanced temporal workload distribution. Moreover, SOLSA incorporates enhancement techniques such as scheduled weight update, early-stop training, and adaptive synapse filtering, which speed up convergence and enhance the learning performance. When compared to other non-BPTT-based SNN learning algorithms, SOLSA demonstrates an average learning accuracy improvement of 14.2%. Furthermore, compared to BPTT, SOLSA achieves a 5% higher average learning accuracy with a 72% reduction in memory cost. | [] | Train |
43,710 | 27 | Title: Imitating Task and Motion Planning with Visuomotor Transformers
Abstract: Imitation learning is a powerful tool for training robot manipulation policies, allowing them to learn from expert demonstrations without manual programming or trial-and-error. However, common methods of data collection, such as human supervision, scale poorly, as they are time-consuming and labor-intensive. In contrast, Task and Motion Planning (TAMP) can autonomously generate large-scale datasets of diverse demonstrations. In this work, we show that the combination of large-scale datasets generated by TAMP supervisors and flexible Transformer models to fit them is a powerful paradigm for robot manipulation. To that end, we present a novel imitation learning system called OPTIMUS that trains large-scale visuomotor Transformer policies by imitating a TAMP agent. OPTIMUS introduces a pipeline for generating TAMP data that is specifically curated for imitation learning and can be used to train performant transformer-based policies. In this paper, we present a thorough study of the design decisions required to imitate TAMP and demonstrate that OPTIMUS can solve a wide variety of challenging vision-based manipulation tasks with over 70 different objects, ranging from long-horizon pick-and-place tasks, to shelf and articulated object manipulation, achieving 70 to 80% success rates. Video results at https://mihdalal.github.io/optimus/ | [
16074,
8994,
2543
] | Validation |
43,711 | 16 | Title: High-Resolution GAN Inversion for Degraded Images in Large Diverse Datasets
Abstract: The last decades are marked by massive and diverse image data, which shows increasingly high resolution and quality. However, some images we obtain may be corrupted, affecting perception and the application of downstream tasks. A generic method for generating a high-quality image from a degraded one is in demand. In this paper, we present a novel GAN inversion framework that utilizes the powerful generative ability of StyleGAN-XL for this problem. To ease the inversion challenge with StyleGAN-XL, Clustering \& Regularize Inversion (CRI) is proposed. Specifically, the latent space is first divided into finer-grained sub-spaces by clustering. Instead of initializing the inversion with the average latent vector, we approximate a centroid latent vector from the clusters, which generates an image close to the input image. Then, an offset with a regularization term is introduced to keep the inverted latent vector within a certain range. We validate our CRI scheme on multiple restoration tasks (i.e., inpainting, colorization, and super-resolution) of complex natural images, and show preferable quantitative and qualitative results. We further demonstrate that our technique is robust in terms of data and different GAN models. To the best of our knowledge, we are the first to adopt StyleGAN-XL for generating high-quality natural images from diverse degraded inputs. Code is available at https://github.com/Booooooooooo/CRI. | [] | Validation |
43,712 | 5 | Title: Cloud-Native Computing: A Survey from the Perspective of Services
Abstract: The development of cloud computing delivery models inspires the emergence of cloud-native computing. Cloud-native computing, as the most influential development principle for web applications, has already attracted increasingly more attention in both industry and academia. Despite the momentum in the cloud-native industrial community, a clear research roadmap on this topic is still missing. As a contribution to this knowledge, this paper surveys key issues during the life-cycle of cloud-native applications, from the perspective of services. Specifically, we elaborate on the research domains by decoupling the life-cycle of cloud-native applications into four states: building, orchestration, operation, and maintenance. We also discuss the fundamental necessities and summarize the key performance metrics that play critical roles during the development and management of cloud-native applications. We highlight the key implications and limitations of existing works in each state. The challenges, future directions, and research opportunities are also discussed. | [] | Train |
43,713 | 16 | Title: Artificial intelligence to advance Earth observation: a perspective
Abstract: Earth observation (EO) is a prime instrument for monitoring land and ocean processes, studying the dynamics at work, and taking the pulse of our planet. This article gives a bird's eye view of the essential scientific tools and approaches informing and supporting the transition from raw EO data to usable EO-based information. The promises, as well as the current challenges of these developments, are highlighted under dedicated sections. Specifically, we cover the impact of (i) Computer vision; (ii) Machine learning; (iii) Advanced processing and computing; (iv) Knowledge-based AI; (v) Explainable AI and causal inference; (vi) Physics-aware models; (vii) User-centric approaches; and (viii) the much-needed discussion of ethical and societal issues related to the massive use of ML technologies in EO. | [] | Validation |
43,714 | 4 | Title: TrojanPuzzle: Covertly Poisoning Code-Suggestion Models
Abstract: With tools like GitHub Copilot, automatic code suggestion is no longer a dream in software engineering. These tools, based on large language models, are typically trained on massive corpora of code mined from unvetted public sources. As a result, these models are susceptible to data poisoning attacks where an adversary manipulates the model's training or fine-tuning phases by injecting malicious data. Poisoning attacks could be designed to influence the model's suggestions at run time for chosen contexts, such as inducing the model into suggesting insecure code payloads. To achieve this, prior poisoning attacks explicitly inject the insecure code payload into the training data, making the poisoning data detectable by static analysis tools that can remove such malicious data from the training set. In this work, we demonstrate two novel data poisoning attacks, COVERT and TROJANPUZZLE, that can bypass static analysis by planting malicious poisoning data in out-of-context regions such as docstrings. Our most novel attack, TROJANPUZZLE, goes one step further in generating less suspicious poisoning data by never including certain (suspicious) parts of the payload in the poisoned data, while still inducing a model that suggests the entire payload when completing code (i.e., outside docstrings). This makes TROJANPUZZLE robust against signature-based dataset-cleansing methods that identify and filter out suspicious sequences from the training data. Our evaluation against two model sizes demonstrates that both COVERT and TROJANPUZZLE have significant implications for how practitioners should select code used to train or tune code-suggestion models. | [
25505,
18306,
28580,
31643,
21068,
1115
] | Train |
43,715 | 24 | Title: Sound Explanation for Trustworthy Machine Learning
Abstract: We take a formal approach to the explainability problem of machine learning systems. We argue against the practice of interpreting black-box models via attributing scores to input components due to inherently conflicting goals of attribution-based interpretation. We prove that no attribution algorithm satisfies specificity, additivity, completeness, and baseline invariance. We then formalize the concept, sound explanation, that has been informally adopted in prior work. A sound explanation entails providing sufficient information to causally explain the predictions made by a system. Finally, we present the application of feature selection as a sound explanation for cancer prediction models to cultivate trust among clinicians. | [] | Train |
43,716 | 3 | Title: Streamlining personal data access requests: From obstructive procedures to automated web workflows
Abstract: Transparency and data portability are two core principles of modern privacy legislations such as the GDPR. From the regulatory perspective, providing individuals (data subjects) with access to their data is a main building block for implementing these. Different from other privacy principles and respective regulatory provisions, however, this right to data access has so far only seen marginal technical reflection. Processes related to performing data subject access requests (DSARs) are thus still to be executed manually, hindering the concept of data access from unfolding its full potential. To tackle this problem, we present an automated approach to the execution of DSARs, employing modern techniques of web automation. In particular, we propose a generic DSAR workflow model, a corresponding formal language for representing the particular workflows of different service providers (controllers), a publicly accessible and extendable workflow repository, and a browser-based execution engine, altogether providing ``one-click'' DSARs. To validate our approach and technical concepts, we examine, formalize and make publicly available the DSAR workflows of 15 widely used service providers and implement the execution engine in a publicly available browser extension. Altogether, we thereby pave the way for automated data subject access requests and lay the groundwork for a broad variety of subsequent technical means helping web users to better understand their privacy-related exposure to different service providers. | [
42283
] | Test |
43,717 | 24 | Title: Fed-ZERO: Efficient Zero-shot Personalization with Federated Mixture of Experts
Abstract: One of the goals in Federated Learning (FL) is to create personalized models that can adapt to the context of each participating client, while utilizing knowledge from a shared global model. Yet, often, personalization requires a fine-tuning step using clients' labeled data in order to achieve good performance. This may not be feasible in scenarios where incoming clients are fresh and/or have privacy concerns. It then remains open how one can achieve zero-shot personalization in these scenarios. We propose a novel solution by using a Mixture-of-Experts (MoE) framework within a FL setup. Our method leverages the diversity of the clients to train specialized experts on different subsets of classes, and a gating function to route the input to the most relevant expert(s). Our gating function harnesses the knowledge of a pretrained model (common expert) to enhance its routing decisions on-the-fly. As a highlight, our approach can improve accuracy by up to 18\% in state-of-the-art FL settings, while maintaining competitive zero-shot performance. In practice, our method can handle non-homogeneous data distributions, scale more efficiently, and improve the state-of-the-art performance on common FL benchmarks. | [] | Train |
43,718 | 10 | Title: Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via Prompt Augmented by ChatGPT
Abstract: In this paper, we aim to develop a large language model (LLM) with the reasoning ability on complex graph data. Currently, LLMs have achieved very impressive performance on various natural language learning tasks, extensions of which have also been applied to study the vision tasks with multi-modal data. However, when it comes to the graph learning tasks, existing LLMs present very serious flaws due to their several inherited weaknesses in performing {multi-step logic reasoning}, {precise mathematical calculation} and {perception about the spatial and temporal factors}. To address such challenges, in this paper, we will investigate the principles, methodologies and algorithms to empower existing LLMs with graph reasoning ability, which will have tremendous impacts on the current research of both LLMs and graph learning. Inspired by the latest ChatGPT and Toolformer models, we propose the Graph-ToolFormer (Graph Reasoning oriented Toolformer) framework to teach LLMs themselves with prompts augmented by ChatGPT to use external graph reasoning API tools. Specifically, we will investigate to teach Graph-ToolFormer to handle various graph data reasoning tasks in this paper, including both (1) very basic graph data loading and graph property reasoning tasks, ranging from simple graph order and size to the graph diameter and periphery, and (2) more advanced reasoning tasks on real-world graph data, such as bibliographic networks, protein molecules, sequential recommender systems, social networks and knowledge graphs. | [
7936,
4610,
33220,
18117,
28294,
40103,
13700,
45131,
1292,
31375,
22288,
39823,
17299,
8084,
3352,
634,
35580,
23805
] | Validation |
43,719 | 24 | Title: Graph Mixer Networks
Abstract: In recent years, the attention mechanism has demonstrated superior performance in various tasks, leading to the emergence of GAT and Graph Transformer models that utilize this mechanism to extract relational information from graph-structured data. However, the high computational cost associated with the Transformer block, as seen in Vision Transformers, has motivated the development of alternative architectures such as MLP-Mixers, which have been shown to improve performance in image tasks while reducing the computational cost. Despite the effectiveness of Transformers in graph-based tasks, their computational efficiency remains a concern. The logic behind MLP-Mixers, which addresses this issue in image tasks, has the potential to be applied to graph-structured data as well. In this paper, we propose the Graph Mixer Network (GMN), also referred to as Graph Nasreddin Nets (GNasNets), a framework that incorporates the principles of MLP-Mixers for graph-structured data. Using a PNA model with multiple aggregators as the foundation, our proposed GMN has demonstrated improved performance compared to Graph Transformers. The source code is available publicly at https://github.com/asarigun/GraphMixerNetworks. | [] | Validation |
43,720 | 16 | Title: ADAPT: Action-aware Driving Caption Transformer
Abstract: End-to-end autonomous driving has great potential in the transportation industry. However, the lack of transparency and interpretability of the automatic decision-making process hinders its industrial adoption in practice. There have been some early attempts to use attention maps or cost volume for better model explainability which is difficult for ordinary passengers to understand. To bridge the gap, we propose an end-to-end transformer-based architecture, ADAPT (Action-aware Driving cAPtion Transformer), which provides user-friendly natural language narrations and reasoning for each decision making step of autonomous vehicular control and action. ADAPT jointly trains both the driving caption task and the vehicular control prediction task, through a shared video representation. Experiments on BDD-X (Berkeley DeepDrive eXplanation) dataset demonstrate state-of-the-art performance of the ADAPT framework on both automatic metrics and human evaluation. To illustrate the feasibility of the proposed framework in real-world applications, we build a novel deployable system that takes raw car videos as input and outputs the action narrations and reasoning in real time. The code, models and data are available at https://github.com/jxbbb/ADAPT. | [
21892,
27782,
37832,
21170,
32664
] | Train |
43,721 | 27 | Title: CMG-Net: An End-to-End Contact-based Multi-Finger Dexterous Grasping Network
Abstract: In this paper, we propose a novel representation for grasping using contacts between multi-finger robotic hands and objects to be manipulated. This representation significantly reduces the prediction dimensions and accelerates the learning process. We present an effective end-to-end network, CMG-Net, for grasping unknown objects in a cluttered environment by efficiently predicting multi-finger grasp poses and hand configurations from a single-shot point cloud. Moreover, we create a synthetic grasp dataset that consists of five thousand cluttered scenes, 80 object categories, and 20 million annotations. We perform a comprehensive empirical study and demonstrate the effectiveness of our grasping representation and CMG-Net. Our work significantly outperforms the state-of-the-art for three-finger robotic hands. We also demonstrate that the model trained using synthetic data performs very well on real robots. | [] | Train |
43,722 | 5 | Title: Flover: A Temporal Fusion Framework for Efficient Autoregressive Model Parallel Inference
Abstract: In the rapidly evolving field of deep learning, the performance of model inference has become a pivotal aspect as models become more complex and are deployed in diverse applications. Among these, autoregressive models stand out due to their state-of-the-art performance in numerous generative tasks. These models, by design, harness a temporal dependency structure, where the current token's probability distribution is conditioned on preceding tokens. This inherently sequential characteristic, however, adheres to the Markov Chain assumption and lacks temporal parallelism, which poses unique challenges. Particularly in industrial contexts where inference requests, following a Poisson time distribution, necessitate diverse response lengths, this absence of parallelism is more profound. Existing solutions, such as dynamic batching and concurrent model instances, nevertheless, come with severe overheads and a lack of flexibility; these coarse-grained methods fall short of achieving optimal latency and throughput. To address these shortcomings, we propose Flover -- a temporal fusion framework for efficient inference in autoregressive models, eliminating the need for heuristic settings and applying to a wide range of inference scenarios. By providing more fine-grained parallelism on the temporality of requests and employing an efficient memory shuffle algorithm, Flover achieves up to 11x faster inference on GPT models compared to the cutting-edge solutions provided by NVIDIA Triton FasterTransformer. Crucially, by leveraging the advanced tensor parallel technique, Flover proves efficacious across diverse computational landscapes, from single-GPU setups to multi-node scenarios, thereby offering robust performance optimization that transcends hardware boundaries. | [
33220
] | Test |
43,723 | 16 | Title: Hierarchical Point-based Active Learning for Semi-supervised Point Cloud Semantic Segmentation
Abstract: Impressive performance on point cloud semantic segmentation has been achieved by fully-supervised methods with large amounts of labelled data. As it is labour-intensive to acquire large-scale point cloud data with point-wise labels, many attempts have been made to explore learning 3D point cloud segmentation with limited annotations. Active learning is one of the effective strategies to achieve this purpose but is still under-explored. The most recent methods of this kind measure the uncertainty of each pre-divided region for manual labelling but they suffer from redundant information and require additional efforts for region division. This paper aims at addressing this issue by developing a hierarchical point-based active learning strategy. Specifically, we measure the uncertainty for each point by a hierarchical minimum margin uncertainty module which considers the contextual information at multiple levels. Then, a feature-distance suppression strategy is designed to select important and representative points for manual labelling. Besides, to better exploit the unlabelled data, we build a semi-supervised segmentation framework based on our active strategy. Extensive experiments on the S3DIS and ScanNetV2 datasets demonstrate that the proposed framework achieves 96.5% and 100% performance of fully-supervised baseline with only 0.07% and 0.1% training data, respectively, outperforming the state-of-the-art weakly-supervised and active learning methods. The code will be available at https://github.com/SmiletoE/HPAL. | [] | Test |
43,724 | 2 | Title: Semantic Analysis of Subexponential Modalities in Distributive Non-commutative Linear Logic
Abstract: In this paper, we consider the full Lambek calculus enriched with subexponential modalities in a distributive setting. We show that the distributive Lambek calculus with subexponentials is complete with respect to its Kripke frames via canonical extensions. In this approach, we consider subexponentials as S4-like modalities and each modality is interpreted with a reflexive and transitive relation similarly to usual Kripke semantics. | [] | Validation |
43,725 | 7 | Title: CACTUS: a computational framework for generating realistic white matter microstructure substrates
Abstract: Monte-Carlo diffusion simulations are a powerful tool for validating tissue microstructure models by generating synthetic diffusion-weighted magnetic resonance images (DW-MRI) in controlled environments. This is fundamental for understanding the link between micrometre-scale tissue properties and DW-MRI signals measured at the millimetre-scale, optimizing acquisition protocols to target microstructure properties of interest, and exploring the robustness and accuracy of estimation methods. However, accurate simulations require substrates that reflect the main microstructural features of the studied tissue. To address this challenge, we introduce a novel computational workflow, CACTUS (Computational Axonal Configurator for Tailored and Ultradense Substrates), for generating synthetic white matter substrates. Our approach allows constructing substrates with higher packing density than existing methods, up to 95% intra-axonal volume fraction, and larger voxel sizes of up to 500 μm³ with rich fibre complexity. CACTUS generates bundles with angular dispersion, bundle crossings, and variations along the fibres of their inner and outer radii and g-ratio. We achieve this by introducing a novel global cost function and a fibre radial growth approach that allows substrates to match predefined targeted characteristics and mirror those reported in histological studies. CACTUS improves the development of complex synthetic substrates, paving the way for future applications in microstructure imaging. | [] | Train |
43,726 | 24 | Title: Exploring a Gradient-based Explainable AI Technique for Time-Series Data: A Case Study of Assessing Stroke Rehabilitation Exercises
Abstract: Explainable artificial intelligence (AI) techniques are increasingly being explored to provide insights into why AI and machine learning (ML) models provide a certain outcome in various applications. However, there has been limited exploration of explainable AI techniques on time-series data, especially in the healthcare context. In this paper, we describe a threshold-based method that utilizes a weakly supervised model and a gradient-based explainable AI technique (i.e. saliency map) and explore its feasibility to identify salient frames of time-series data. Using the dataset from 15 post-stroke survivors performing three upper-limb exercises and labels on whether a compensatory motion is observed or not, we implemented a feed-forward neural network model and utilized gradients of each input on model outcomes to identify salient frames that involve compensatory motions. According to the evaluation using frame-level annotations, our approach achieved a recall of 0.96 and an F2-score of 0.91. Our results demonstrated the potential of a gradient-based explainable AI technique (e.g. saliency map) for time-series data, such as highlighting the frames of a video that therapists should focus on reviewing and reducing the efforts on frame-level labeling for model training. | [
34742
] | Train |
43,727 | 8 | Title: Interference-Aware Deployment for Maximizing User Satisfaction in Multi-UAV Wireless Networks
Abstract: In this letter, we study the deployment of Unmanned Aerial Vehicle mounted Base Stations (UAV-BSs) in multi-UAV cellular networks. We model the multi-UAV deployment problem as a user satisfaction maximization problem, that is, maximizing the proportion of served ground users (GUs) that meet a given minimum data rate requirement. We propose an interference-aware deployment (IAD) algorithm for serving arbitrarily distributed outdoor GUs. The proposed algorithm can alleviate the problem of overlapping coverage between adjacent UAV-BSs to minimize inter-cell interference. Therefore, reducing co-channel interference between UAV-BSs will improve user satisfaction and ensure that most GUs can achieve the minimum data rate requirement. Simulation results show that our proposed IAD outperforms comparative methods by more than 10% in user satisfaction in high-density environments. | [] | Test |
43,728 | 11 | Title: Multi-agent Coordination Under Temporal Logic Tasks and Team-Wise Intermittent Communication
Abstract: Multi-agent systems outperform single agent in complex collaborative tasks. However, in large-scale scenarios, ensuring timely information exchange during decentralized task execution remains a challenge. This work presents an online decentralized coordination scheme for multi-agent systems under complex local tasks and intermittent communication constraints. Unlike existing strategies that enforce all-time or intermittent connectivity, our approach allows agents to join or leave communication networks at aperiodic intervals, as deemed optimal by their online task execution. This scheme concurrently determines local plans and refines the communication strategy, i.e., where and when to communicate as a team. A decentralized potential game is modeled among agents, for which a Nash equilibrium is generated iteratively through online local search. It guarantees local task completion and intermittent communication constraints. Extensive numerical simulations are conducted against several strong baselines. | [] | Validation |
43,729 | 3 | Title: Quantitative dynamics of design thinking and creativity perspectives in company context
Abstract: nan | [] | Validation |
43,730 | 27 | Title: Contact Reduction with Bounded Stiffness for Robust Sim-to-Real Transfer of Robot Assembly
Abstract: In sim-to-real Reinforcement Learning (RL), a policy is trained in a simulated environment and then deployed on the physical system. The main challenge of sim-to-real RL is to overcome the reality gap - the discrepancies between the real world and its simulated counterpart. Using general geometric representations, such as convex decomposition, triangular mesh, signed distance field can improve simulation fidelity, and thus potentially narrow the reality gap. Common to these approaches is that many contact points are generated for geometrically-complex objects, which slows down simulation and may cause numerical instability. Contact reduction methods address these issues by limiting the number of contact points, but the validity of these methods for sim-to-real RL has not been confirmed. In this paper, we present a contact reduction method with bounded stiffness to improve the simulation accuracy. Our experiments show that the proposed method critically enables training RL policy for a tight-clearance double pin insertion task and successfully deploying the policy on a rigid, position-controlled physical robot. | [] | Train |
43,731 | 16 | Title: Exploring Data Geometry for Continual Learning
Abstract: Continual learning aims to efficiently learn from a non-stationary stream of data while avoiding forgetting the knowledge of old data. In many practical applications, data complies with non-Euclidean geometry. As such, the commonly used Euclidean space cannot gracefully capture non-Euclidean geometric structures of data, leading to inferior results. In this paper, we study continual learning from a novel perspective by exploring data geometry for the non-stationary stream of data. Our method dynamically expands the geometry of the underlying space to match growing geometric structures induced by new data, and prevents forgetting by taking geometric structures of old data into account. In doing so, making use of the mixed curvature space, we propose an incremental search scheme, through which the growing geometric structures are encoded. Then, we introduce an angular-regularization loss and a neighbor-robustness loss to train the model, capable of penalizing the change of global geometric structures and local geometric structures. Experiments show that our method achieves better performance than baseline methods designed in Euclidean space. | [] | Train |
43,732 | 30 | Title: Toward Fairness in Text Generation via Mutual Information Minimization based on Importance Sampling
Abstract: Pretrained language models (PLMs), such as GPT2, have achieved remarkable empirical performance in text generation tasks. However, pretrained on large-scale natural language corpora, the generated text from PLMs may exhibit social bias against disadvantaged demographic groups. To improve the fairness of PLMs in text generation, we propose to minimize the mutual information between the semantics in the generated text sentences and their demographic polarity, i.e., the demographic group to which the sentence is referring. In this way, the mentioning of a demographic group (e.g., male or female) is encouraged to be independent from how it is described in the generated text, thus effectively alleviating the social bias. Moreover, we propose to efficiently estimate the upper bound of the above mutual information via importance sampling, leveraging a natural language corpus. We also propose a distillation mechanism that preserves the language modeling ability of the PLMs after debiasing. Empirical results on real-world benchmarks demonstrate that the proposed method yields superior performance in term of both fairness and language modeling ability. | [
7786,
4054
] | Train |
43,733 | 28 | Title: Perturbation-Resilient Sets for Dynamic Service Balancing
Abstract: Balanced and swap-robust minimal trades, introduced in [1], are important for studying the balance and stability of server access request protocols under data popularity changes. Constructions of such trades have so far relied on paired sets obtained through iterative combining of smaller sets that have provable stability guarantees, coupled with exhaustive computer search. Currently, there exists a nonnegligible gap between the resulting total dynamic balance discrepancy and the known theoretical lower bound. We present both new upper and lower bounds on the total service requests discrepancy under limited popularity changes. Our constructive near-optimal approach uses a new class of paired graphs whose vertices are two balanced sets with edges (arcs) that capture the balance and potential balance changes induced by limited-magnitude popularity changes (swaps). | [] | Train |
43,734 | 18 | Title: Safety of self-assembled neuromorphic hardware
Abstract: The scalability of modern computing hardware is limited by physical bottlenecks and high energy consumption. These limitations could be addressed by neuromorphic hardware (NMH), which is inspired by the human brain. NMH enables physically built-in capabilities of information processing at the hardware level. In other words, brain-like features bias hardware towards intelligence at scale. In Table 1 we compare computing devices by their ability to scale (scaling features) and adaptation of brain-inspired concepts (neuromorphic features). Neuromorphic computing paradigms require a novel approach to safe, interpretable AI. In order to effectively engage with the risk of misaligned AI, safety research may need to expand its scope to include NMH. This may be best achieved by supporting those currently engaged in NMH capability research to work on safety and related areas. | [] | Train |
43,735 | 27 | Title: How Simulation Helps Autonomous Driving: A Survey of Sim2real, Digital Twins, and Parallel Intelligence
Abstract: Safety and cost are two important concerns for the development of autonomous driving technologies. From academic research to commercial applications of autonomous driving vehicles, sufficient simulation and real world testing are required. In general, large-scale testing is conducted in a simulation environment and then the learned driving knowledge is transferred to the real world, so how to adapt driving knowledge learned in simulation to reality becomes a critical issue. However, the virtual simulation world differs from the real world in many aspects such as lighting, textures, vehicle dynamics, and agents' behaviors, etc., which makes it difficult to bridge the gap between the virtual and real worlds. This gap is commonly referred to as the reality gap (RG). In recent years, researchers have explored various approaches to address the reality gap issue, which can be broadly classified into three categories: transferring knowledge from simulation to reality (sim2real), learning in digital twins (DTs), and learning by parallel intelligence (PI) technologies. In this paper, we consider the solutions through the sim2real, DTs, and PI technologies, and review important applications and innovations in the field of autonomous driving. Meanwhile, we present the state of the art from the perspectives of algorithms, models, and simulators, and elaborate on the development process from sim2real to DTs and PI. The presentation also illustrates the far-reaching effects and challenges in the development of sim2real, DTs, and PI in autonomous driving. | [
18544,
28396
] | Train |
43,736 | 30 | Title: SlideVQA: A Dataset for Document Visual Question Answering on Multiple Images
Abstract: Visual question answering on document images that contain textual, visual, and layout information, called document VQA, has received much attention recently. Although many datasets have been proposed for developing document VQA systems, most of the existing datasets focus on understanding the content relationships within a single image and not across multiple images. In this study, we propose a new multi-image document VQA dataset, SlideVQA, containing 2.6k+ slide decks composed of 52k+ slide images and 14.5k questions about a slide deck. SlideVQA requires complex reasoning, including single-hop, multi-hop, and numerical reasoning, and also provides annotated arithmetic expressions of numerical answers for enhancing the ability of numerical reasoning. Moreover, we developed a new end-to-end document VQA model that treats evidence selection and question answering as a unified sequence-to-sequence format. Experiments on SlideVQA show that our model outperformed existing state-of-the-art QA models, but that it still has a large gap behind human performance. We believe that our dataset will facilitate research on document VQA. | [
19066,
26338
] | Train |
43,737 | 16 | Title: Score Priors Guided Deep Variational Inference for Unsupervised Real-World Single Image Denoising
Abstract: Real-world single image denoising is crucial and practical in computer vision. Bayesian inversions combined with score priors now have proven effective for single image denoising but are limited to white Gaussian noise. Moreover, applying existing score-based methods for real-world denoising requires not only the explicit training of score priors on the target domain but also the careful design of sampling procedures for posterior inference, which is complicated and impractical. To address these limitations, we propose a score priors-guided deep variational inference, namely ScoreDVI, for practical real-world denoising. By considering the deep variational image posterior with a Gaussian form, score priors are extracted based on easily accessible minimum MSE Non-$i.i.d$ Gaussian denoisers and variational samples, which in turn facilitate optimizing the variational image posterior. Such a procedure adaptively applies cheap score priors to denoising. Additionally, we exploit a Non-$i.i.d$ Gaussian mixture model and variational noise posterior to model the real-world noise. This scheme also enables the pixel-wise fusion of multiple image priors and variational image posteriors. Besides, we develop a noise-aware prior assignment strategy that dynamically adjusts the weight of image priors in the optimization. Our method outperforms other single image-based real-world denoising methods and achieves comparable performance to dataset-based unsupervised methods. | [] | Train |
43,738 | 24 | Title: A Survey on Recent Teacher-student Learning Studies
Abstract: Knowledge distillation is a method of transferring the knowledge from a complex deep neural network (DNN) to a smaller and faster DNN, while preserving its accuracy. Recent variants of knowledge distillation include teaching assistant distillation, curriculum distillation, mask distillation, and decoupling distillation, which aim to improve the performance of knowledge distillation by introducing additional components or by changing the learning process. Teaching assistant distillation involves an intermediate model called the teaching assistant, while curriculum distillation follows a curriculum similar to human education. Mask distillation focuses on transferring the attention mechanism learned by the teacher, and decoupling distillation decouples the distillation loss from the task loss. Overall, these variants of knowledge distillation have shown promising results in improving the performance of knowledge distillation. | [
35947,
9251,
8940
] | Train |
43,739 | 34 | Title: Optimal mixing of the down-up walk on independent sets of a given size
Abstract: Let $G$ be a graph on $n$ vertices of maximum degree $\Delta$. We show that, for any $\delta>0$, the down-up walk on independent sets of size $k \leq (1-\delta)\alpha_c(\Delta)n$ mixes in time $O_{\Delta,\delta}(k\log{n})$, thereby resolving a conjecture of Davies and Perkins in an optimal form. Here, $\alpha_{c}(\Delta)n$ is the NP-hardness threshold for the problem of counting independent sets of a given size in a graph on $n$ vertices of maximum degree $\Delta$. Our mixing time has optimal dependence on $k,n$ for the entire range of $k$; previously, even polynomial mixing was not known. In fact, for $k = \Omega_{\Delta}(n)$ in this range, we establish a log-Sobolev inequality with optimal constant $\Omega_{\Delta,\delta}(1/n)$. At the heart of our proof are three new ingredients, which may be of independent interest. The first is a method for lifting $\ell_\infty$-independence from a suitable distribution on the discrete cube -- in this case, the hard-core model -- to the slice by proving stability of an Edgeworth expansion using a multivariate zero-free region for the base distribution. The second is a generalization of the Lee-Yau induction to prove log-Sobolev inequalities for distributions on the slice with considerably less symmetry than the uniform distribution. The third is a sharp decomposition-type result which provides a lossless comparison between the Dirichlet form of the original Markov chain and that of the so-called projected chain in the presence of a contractive coupling. | [] | Train |
43,740 | 16 | Title: Towards Diverse Temporal Grounding under Single Positive Labels
Abstract: Temporal grounding aims to retrieve moments of the described event within an untrimmed video by a language query. Typically, existing methods assume annotations are precise and unique, yet one query may describe multiple moments in many cases. Hence, simply taking it as a one-vs-one mapping task and striving to match single-label annotations will inevitably introduce false negatives during optimization. In this study, we reformulate this task as a one-vs-many optimization problem under the condition of single positive labels. The unlabeled moments are considered unobserved rather than negative, and we explore mining potential positive moments to assist in multiple moment retrieval. In this setting, we propose a novel Diverse Temporal Grounding framework, termed DTG-SPL, which mainly consists of a positive moment estimation (PME) module and a diverse moment regression (DMR) module. PME leverages semantic reconstruction information and an expected positive regularization to uncover potential positive moments in an online fashion. Under the supervision of these pseudo positives, DMR is able to localize diverse moments in parallel that meet different users. The entire framework allows for end-to-end optimization as well as fast inference. Extensive experiments on Charades-STA and ActivityNet Captions show that our method achieves superior performance in terms of both single-label and multi-label metrics. | [] | Test |
43,741 | 30 | Title: Improving Non-autoregressive Translation Quality with Pretrained Language Model, Embedding Distillation and Upsampling Strategy for CTC
Abstract: Non-autoregressive approaches aim to improve the inference speed of translation models, particularly those that generate output in a one-pass forward manner. However, these approaches often suffer from a significant drop in translation quality compared to autoregressive models. This paper introduces a series of innovative techniques to enhance the translation quality of Non-Autoregressive Translation (NAT) models while maintaining a substantial acceleration in inference speed. We propose fine-tuning Pretrained Multilingual Language Models (PMLMs) with the CTC loss to train NAT models effectively. Furthermore, we adopt the MASK insertion scheme for up-sampling instead of token duplication, and we present an embedding distillation method to further enhance performance. In our experiments, our model outperforms the baseline autoregressive model (Transformer \textit{base}) on multiple datasets, including WMT'14 DE$\leftrightarrow$EN, WMT'16 RO$\leftrightarrow$EN, and IWSLT'14 DE$\leftrightarrow$EN. Notably, our model achieves better performance than the baseline autoregressive model on the IWSLT'14 En$\leftrightarrow$De and WMT'16 En$\leftrightarrow$Ro datasets, even without using distillation data during training. It is worth highlighting that on the IWSLT'14 DE$\rightarrow$EN dataset, our model achieves an impressive BLEU score of 39.59, setting a new state-of-the-art performance. Additionally, our model exhibits a remarkable speed improvement of 16.35 times compared to the autoregressive model. | [] | Train |
43,742 | 16 | Title: An Improved Encoder-Decoder Framework for Food Energy Estimation
Abstract: Dietary assessment is essential to maintaining a healthy lifestyle. Automatic image-based dietary assessment is a growing field of research due to the increasing prevalence of image capturing devices (e.g. mobile phones). In this work, we estimate food energy from a single monocular image, a difficult task due to the limited hard-to-extract amount of energy information present in an image. To do so, we employ an improved encoder-decoder framework for energy estimation; the encoder transforms the image into a representation embedded with food energy information in an easier-to-extract format, which the decoder then extracts the energy information from. To implement our method, we compile a high-quality food image dataset verified by registered dietitians containing eating scene images, food-item segmentation masks, and ground truth calorie values. Our method improves upon previous caloric estimation methods by over 10\% and 30 kCal in terms of MAPE and MAE respectively. | [
6272,
14129,
13384,
34739
] | Test |
43,743 | 28 | Title: Minimum-Latency Scheduling For Partial-Information Multiple Access Schemes
Abstract: Partial-information multiple access (PIMA) is an orthogonal multiple access (OMA) uplink scheme where time is divided into frames, each composed of two parts. The first part is used to count the number of users with packets to transmit, while the second has a variable number of allocated slots, each assigned to multiple users for uplink data transmission. We investigate the case of correlated user activations, wherein the correlation is due to the retransmissions of the collided packets, modeling PIMA as a partially observable Markov decision process. The assignment of users to slots is optimized based on the knowledge of both the number of active users and past successful transmissions and collisions. The scheduling turns out to be a mixed-integer nonlinear programming problem, with a complexity growing exponentially with the number of users. Thus, sub-optimal greedy solutions are proposed and evaluated. Our solutions show substantial performance improvements with respect to both traditional OMA schemes and conventional PIMA. | [
15986
] | Train |
43,744 | 24 | Title: A Bayesian approach to quantifying uncertainties and improving generalizability in traffic prediction models
Abstract: Deep-learning models for traffic data prediction can have superior performance in modeling complex functions using a multi-layer architecture. However, a major drawback is that most of these approaches do not offer forecasts with uncertainty estimates, which are essential for traffic operations and control. Without uncertainty estimates, it is difficult to place any level of trust in the model predictions, and operational strategies relying on overconfident predictions can lead to worsening traffic conditions. In this study, we propose a Bayesian recurrent neural network framework for uncertainty quantification in traffic prediction with higher generalizability by introducing spectral normalization to its hidden layers. In our paper, we have shown that normalization alters the training process of deep neural networks by controlling the model's complexity and reducing the risk of overfitting to the training data. This, in turn, helps improve the generalization performance of the model on out-of-distribution datasets. Results demonstrate that spectral normalization improves uncertainty estimates and significantly outperforms both the layer normalization and model without normalization in single-step prediction horizons. This improved performance can be attributed to the ability of spectral normalization to better localize the feature space of the data under perturbations. Our findings are especially relevant to traffic management applications, where predicting traffic conditions across multiple locations is the goal, but the availability of training data from multiple locations is limited. Spectral normalization, therefore, provides a more generalizable approach that can effectively capture the underlying patterns in traffic data without requiring location-specific models. | [] | Train |
43,745 | 24 | Title: Annealing Self-Distillation Rectification Improves Adversarial Training
Abstract: In standard adversarial training, models are optimized to fit one-hot labels within allowable adversarial perturbation budgets. However, the ignorance of underlying distribution shifts brought by perturbations causes the problem of robust overfitting. To address this issue and enhance adversarial robustness, we analyze the characteristics of robust models and identify that robust models tend to produce smoother and well-calibrated outputs. Based on the observation, we propose a simple yet effective method, Annealing Self-Distillation Rectification (ADR), which generates soft labels as a better guidance mechanism that accurately reflects the distribution shift under attack during adversarial training. By utilizing ADR, we can obtain rectified distributions that significantly improve model robustness without the need for pre-trained models or extensive extra computation. Moreover, our method facilitates seamless plug-and-play integration with other adversarial training techniques by replacing the hard labels in their objectives. We demonstrate the efficacy of ADR through extensive experiments and strong performances across datasets. | [
10606
] | Test |
43,746 | 30 | Title: FIREBALL: A Dataset of Dungeons and Dragons Actual-Play with Structured Game State Information
Abstract: Dungeons & Dragons (D&D) is a tabletop roleplaying game with complex natural language interactions between players and hidden state information. Recent work has shown that large language models (LLMs) that have access to state information can generate higher quality game turns than LLMs that use dialog history alone. However, previous work used game state information that was heuristically created and was not a true gold standard game state. We present FIREBALL, a large dataset containing nearly 25,000 unique sessions from real D&D gameplay on Discord with true game state info. We recorded game play sessions of players who used the Avrae bot, which was developed to aid people in playing D&D online, capturing language, game commands and underlying game state information. We demonstrate that FIREBALL can improve natural language generation (NLG) by using Avrae state information, improving both automated metrics and human judgments of quality. Additionally, we show that LLMs can generate executable Avrae commands, particularly after finetuning. | [
14145,
20995
] | Validation |
43,747 | 23 | Title: The BET project: Behavior-enabled IoT
Abstract: IoT is changing the way Internet is used due to the availability of a large amount of data timely collected from every-day life objects. Designing applications in this new scenario poses new challenges. This extended abstract discusses them and presents the objective of the BeT project whose main aim is to introduce a reference architecture, a conceptual framework, and related techniques to design behavior-enabled IoT systems and applications. | [] | Train |
43,748 | 16 | Title: MMANet: Margin-Aware Distillation and Modality-Aware Regularization for Incomplete Multimodal Learning
Abstract: Multimodal learning has shown great potentials in numerous scenes and attracts increasing interest recently. However, it often encounters the problem of missing modality data and thus suffers severe performance degradation in practice. To this end, we propose a general framework called MMANet to assist incomplete multimodal learning. It consists of three components: the deployment network used for inference, the teacher network transferring comprehensive multimodal information to the deployment network, and the regularization network guiding the deployment network to balance weak modality combinations. Specifically, we propose a novel margin-aware distillation (MAD) to assist the information transfer by weighing the sample contribution with the classification uncertainty. This encourages the deployment network to focus on the samples near decision boundaries and acquire the refined inter-class margin. Besides, we design a modality-aware regularization (MAR) algorithm to mine the weak modality combinations and guide the regularization network to calculate prediction loss for them. This forces the deployment network to improve its representation ability for the weak modality combinations adaptively. Finally, extensive experiments on multimodal classification and segmentation tasks demonstrate that our MMANet outperforms the state-of-the-art significantly. Code is available at: https://github.com/shicaiwei123/MMANet | [] | Train |
43,749 | 24 | Title: Better Training of GFlowNets with Local Credit and Incomplete Trajectories
Abstract: Generative Flow Networks or GFlowNets are related to Monte-Carlo Markov chain methods (as they sample from a distribution specified by an energy function), reinforcement learning (as they learn a policy to sample composed objects through a sequence of steps), generative models (as they learn to represent and sample from a distribution) and amortized variational methods (as they can be used to learn to approximate and sample from an otherwise intractable posterior, given a prior and a likelihood). They are trained to generate an object $x$ through a sequence of steps with probability proportional to some reward function $R(x)$ (or $\exp(-\mathcal{E}(x))$ with $\mathcal{E}(x)$ denoting the energy function), given at the end of the generative trajectory. Like for other RL settings where the reward is only given at the end, the efficiency of training and credit assignment may suffer when those trajectories are longer. With previous GFlowNet work, no learning was possible from incomplete trajectories (lacking a terminal state and the computation of the associated reward). In this paper, we consider the case where the energy function can be applied not just to terminal states but also to intermediate states. This is for example achieved when the energy function is additive, with terms available along the trajectory. We show how to reparameterize the GFlowNet state flow function to take advantage of the partial reward already accrued at each state. This enables a training objective that can be applied to update parameters even with incomplete trajectories. Even when complete trajectories are available, being able to obtain more localized credit and gradients is found to speed up training convergence, as demonstrated across many simulations. | [
17762,
36006,
40424,
35628,
15438,
24784,
9778,
32279,
21081
] | Test |
43,750 | 6 | Title: Auto-Paízo Games: Towards Understanding the Design of Games that Aim to Unify a Player's Physical Body and the Virtual World
Abstract: Most digital bodily games focus on the body as they use movement as input. However, they also draw the player's focus away from the body as the output occurs on visual displays, creating a divide between the physical body and the virtual world. We propose a novel approach - the "Body as a Play Material" - where a player uses their body as both input and output to unify the physical body and the virtual world. To showcase this approach, we designed three games where a player uses one of their hands (input) to play against the other hand (output) by loaning control over its movements to an Electrical Muscle Stimulation (EMS) system. We conducted a thematic analysis on the data obtained from a field study with 12 participants to articulate four player experience themes. We discuss our results about how participants appreciated the engagement with the variety of bodily movements for play and the ambiguity of using their body as a play material. Ultimately, our work aims to unify the physical body and the virtual world. | [] | Train |
43,751 | 28 | Title: Multi-User Distributed Computing Via Compressed Sensing
Abstract: The multi-user linearly-separable distributed computing problem is considered here, in which N servers help to compute the real-valued functions requested by K users, where each function can be written as a linear combination of up to L (generally non-linear) subfunctions. Each server computes a fraction γ of the subfunctions, then communicates a function of its computed outputs to some of the users, and then each user collects its received data to recover its desired function. Our goal is to bound the ratio between the computation workload done by all servers over the number of datasets. To this end, we here reformulate the real-valued distributed computing problem into a matrix factorization problem and then into a basic sparse recovery problem, where sparsity implies computational savings. Building on this, we first give a simple probabilistic scheme for subfunction assignment, which allows us to upper bound the optimal normalized computation cost as $\gamma \leq \frac{K}{N}$ that a generally intractable ℓ0-minimization would give. To bypass the intractability of such optimal scheme, we show that if these optimal schemes enjoy $\gamma \leq -r\frac{K}{N}W_{-1}^{-1}\left(-\frac{2K}{eNr}\right)$ (where $W_{-1}(\cdot)$ is the Lambert function and r calibrates the communication between servers and users), then they can actually be derived using a tractable Basis Pursuit ℓ1-minimization. This newly-revealed connection opens up the possibility of designing practical distributed computing algorithms by employing tools and methods from compressed sensing. | [] | Train |
43,752 | 24 | Title: (Local) Differential Privacy has NO Disparate Impact on Fairness
Abstract: nan | [] | Train |
43,753 | 28 | Title: On Euclidean, Hermitian and symplectic quasi-cyclic complementary dual codes
Abstract: Linear complementary dual codes (LCD) intersect trivially with their dual. In this paper, we develop a new characterization for LCD codes, which allows us to judge the complementary duality of linear codes from the codeword level. Further, we determine the sufficient and necessary conditions for one-generator quasi-cyclic codes to be LCD codes involving Euclidean, Hermitian, and symplectic inner products. Finally, we construct many Euclidean, Hermitian, and symplectic LCD codes with excellent parameters, some improving the results in the literature. Remarkably, we construct a symplectic LCD $[28,6]_2$ code with symplectic distance $10$, which corresponds to a trace Hermitian additive complementary dual $(14,3,10)_4$ code that outperforms the optimal quaternary Hermitian LCD $[14,3,9]_4$ code. | [
35691,
7283
] | Validation |
43,754 | 24 | Title: Convergence of Alternating Gradient Descent for Matrix Factorization
Abstract: We consider alternating gradient descent (AGD) with fixed step size $\eta>0$, applied to the asymmetric matrix factorization objective. We show that, for a rank-$r$ matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$, $T = O\left( \left(\frac{\sigma_1(\mathbf{A})}{\sigma_r(\mathbf{A})}\right)^2 \log(1/\epsilon)\right)$ iterations of alternating gradient descent suffice to reach an $\epsilon$-optimal factorization $\| \mathbf{A} - \mathbf{X}_T^{\vphantom{\intercal}} \mathbf{Y}_T^{\intercal} \|_{\rm F}^2 \leq \epsilon \| \mathbf{A} \|_{\rm F}^2$ with high probability starting from an atypical random initialization. The factors have rank $d>r$ so that $\mathbf{X}_T\in\mathbb{R}^{m \times d}$ and $\mathbf{Y}_T \in\mathbb{R}^{n \times d}$. Experiments suggest that our proposed initialization is not merely of theoretical benefit, but rather significantly improves convergence of gradient descent in practice. Our proof is conceptually simple: a uniform PL-inequality and uniform Lipschitz smoothness constant are guaranteed for a sufficient number of iterations, starting from our random initialization. Our proof method should be useful for extending and simplifying convergence analyses for a broader class of nonconvex low-rank factorization problems. | [] | Test |
43,755 | 20 | Title: The Localized Union-of-Balls Bifiltration
Abstract: We propose an extension of the classical union-of-balls filtration of persistent homology: fixing a point $q$, we focus our attention to a ball centered at $q$ whose radius is controlled by a second scale parameter. We discuss an absolute variant, where the union is just restricted to the $q$-ball, and a relative variant where the homology of the $q$-ball relative to its boundary is considered. Interestingly, these natural constructions lead to bifiltered simplicial complexes which are not $k$-critical for any finite $k$. Nevertheless, we demonstrate that these bifiltrations can be computed exactly and efficiently, and we provide a prototypical implementation using the CGAL library. We also argue that some of the recent algorithmic advances for $2$-parameter persistence (which usually assume $k$-criticality for some finite $k$) carry over to the $\infty$-critical case. | [] | Train |
43,756 | 30 | Title: Think Rationally about What You See: Continuous Rationale Extraction for Relation Extraction
Abstract: Relation extraction (RE) aims to extract potential relations according to the context of two entities; thus, deriving rational contexts from sentences plays an important role. Previous works either focus on how to leverage the entity information (e.g., entity types, entity verbalization) to infer relations, but ignore context-focused content, or use counterfactual thinking to remove the model's bias of potential relations in entities, but the relation reasoning process will still be hindered by irrelevant content. Therefore, how to preserve relevant content and remove noisy segments from sentences is a crucial task. In addition, retained content needs to be fluent enough to maintain semantic coherence and interpretability. In this work, we propose a novel rationale extraction framework named RE2, which leverages two continuity and sparsity factors to obtain relevant and coherent rationales from sentences. To solve the problem that the gold rationales are not labeled, RE2 applies an optimizable binary mask to each token in the sentence, and adjusts the rationales that need to be selected according to the relation label. Experiments on four datasets show that RE2 surpasses baselines. | [
35440,
25106,
15300,
1326
] | Train |
43,757 | 34 | Title: Optimal distance query reconstruction for graphs without long induced cycles
Abstract: Let $G=(V,E)$ be an $n$-vertex connected graph of maximum degree $\Delta$. Given access to $V$ and an oracle that given two vertices $u,v\in V$, returns the shortest path distance between $u$ and $v$, how many queries are needed to reconstruct $E$? We give a simple deterministic algorithm to reconstruct trees using $\Delta n\log_\Delta n+(\Delta+2)n$ distance queries and show that even randomised algorithms need to use at least $\frac1{200} \Delta n\log_\Delta n$ queries in expectation. The best previous lower bound was an information-theoretic lower bound of $\Omega(n\log n/\log \log n)$. Our lower bound also extends to related query models including distance queries for phylogenetic trees, membership queries for learning partitions and path queries in directed trees. We extend our deterministic algorithm to reconstruct graphs without induced cycles of length at least $k$ using $O_{\Delta,k}(n\log n)$ queries, which includes various graph classes of interest such as chordal graphs, permutation graphs and AT-free graphs. Since the previously best known randomised algorithm for chordal graphs uses $O_{\Delta}(n\log^2 n)$ queries in expectation, we both get rid of the randomness and get the optimal dependency in $n$ for chordal graphs and various other graph classes. Finally, we build on an algorithm of Kannan, Mathieu, and Zhou [ICALP, 2015] to give a randomised algorithm for reconstructing graphs of treelength $k$ using $O_{\Delta,k}(n\log^2n)$ queries in expectation. | [] | Validation |
43,758 | 24 | Title: The 3D Structural Phenotype of the Glaucomatous Optic Nerve Head and its Relationship with The Severity of Visual Field Damage
Abstract: Purpose: To describe the 3D structural changes in both connective and neural tissues of the optic nerve head (ONH) that occur concurrently at different stages of glaucoma using traditional and AI-driven approaches. Design: Retrospective cross-sectional study. Methods: We included 213 normal, 204 mild glaucoma (mean deviation [MD] ≥ -6.00 dB), 118 moderate glaucoma (MD of -6.01 to -12.00 dB), and 118 advanced glaucoma patients (MD < -12.00 dB). All subjects had their ONHs imaged in 3D with Spectralis optical coherence tomography. To describe the 3D structural phenotype of glaucoma as a function of severity, we used two different approaches: (1) We extracted 'human-defined' 3D structural parameters of the ONH (total of 10) including retinal nerve fiber layer (RNFL) thickness, minimum rim width, lamina cribrosa (LC) shape and depth at different stages of glaucoma; (2) we also employed a geometric deep learning method (i.e. PointNet) to identify the most important 3D structural features that differentiate ONHs from different glaucoma severity groups without any human input. Results: We observed that the majority of ONH structural changes occurred in the early glaucoma stage, followed by a plateau effect in the later stages. Using PointNet, we also found that 3D ONH structural changes were present in both neural and connective tissues. Specifically, 57% (normal to mild glaucoma), 39% (mild to moderate glaucoma), and 53% (moderate to advanced glaucoma) of ONH landmarks that showed major structural changes were located in neural tissues with the remaining located in connective tissues. In both approaches, we observed that structural changes were more prominent in the superior and inferior quadrant of the ONH, particularly in the RNFL, the prelamina, and the LC. As the | [] | Validation |
43,759 | 6 | Title: Towards Human-AI Collaborative Urban Science Research Enabled by Pre-trained Large Language Models
Abstract: Pre-trained large language models (PLMs) have the potential to support urban science research through content creation, information extraction, assisted programming, text classification, and other technical advances. In this research, we explored the opportunities, challenges, and prospects of PLMs in urban science research. Specifically, we discussed potential applications of PLMs to urban institution, urban space, urban information, and citizen behaviors research through seven examples using ChatGPT. We also examined the challenges of PLMs in urban science research from both technical and social perspectives. The prospects of the application of PLMs in urban science research were then proposed. We found that PLMs can effectively aid in understanding complex concepts in urban science, facilitate urban spatial form identification, assist in disaster monitoring, and sense public sentiment. At the same time, however, the applications of PLMs in urban science research face evident threats, such as technical limitations, security, privacy, and social bias. The development of fundamental models based on domain knowledge and human-AI collaboration may help improve PLMs to support urban science research in future. | [
11273,
25227,
35388
] | Train |
43,760 | 26 | Title: A Multi-factorial Analysis of Polarization on Social Media
Abstract: Polarization is an increasingly worrying phenomenon within social media. Recent work has made it possible to detect and even quantify polarization. Nevertheless, the few existing metrics, although defined in a continuous space, often lead to a unimodal distribution of data once applied to users’ interactions, making the distinction between polarized and non-polarized users difficult to draw. Furthermore, each metric relies on a single factor and does not reflect the overall user behavior. Modeling polarization in a single form runs the risk of obscuring inter-individual differences. In this paper, we propose to have a deeper look at polarized online behaviors and to compare individual metrics. We collected about 300K retweets from 1K French users between January and July 2022 on Twitter. Each retweet is related to the highly controversial vaccine debate. Results show that a multi-factorial analysis leads to the identification of distinct and potentially explainable behavioral classes. This finer understanding of behaviors is an essential step to adapt news recommendation strategies so that no user gets locked into an echo chamber or filter bubble. | [] | Validation |
43,761 | 24 | Title: Loss minimization yields multicalibration for large neural networks
Abstract: Multicalibration is a notion of fairness that aims to provide accurate predictions across a large set of groups. Multicalibration is known to be a different goal than loss minimization, even for simple predictors such as linear functions. In this note, we show that for (almost all) large neural network sizes, optimally minimizing squared error leads to multicalibration. Our results are about representational aspects of neural networks, and not about algorithmic or sample complexity considerations. Previous such results were known only for predictors that were nearly Bayes-optimal and were therefore representation independent. We emphasize that our results do not apply to specific algorithms for optimizing neural networks, such as SGD, and they should not be interpreted as"fairness comes for free from optimizing neural networks". | [
19937,
29754,
29375
] | Validation |
43,762 | 14 | Title: Some New Non-Commutative Matrix Multiplication Algorithms of Size (n, m, 6)
Abstract: For various $2\leq n,m \leq 6$, we propose some new algorithms for multiplying an $n\times m$ matrix with an $m \times 6$ matrix over a possibly noncommutative coefficient ring. | [] | Test |
43,763 | 4 | Title: Machine Unlearning: A Survey
Abstract: Machine learning has attracted widespread attention and evolved into an enabling technology for a wide range of highly successful applications, such as intelligent computer vision, speech recognition, medical diagnosis, and more. Yet a special need has arisen where, due to privacy, usability, and/or the right to be forgotten, information about some specific samples needs to be removed from a model, called machine unlearning. This emerging technology has drawn significant interest from both academics and industry due to its innovation and practicality. At the same time, this ambitious problem has led to numerous research efforts aimed at confronting its challenges. To the best of our knowledge, no study has analyzed this complex topic or compared the feasibility of existing unlearning solutions in different kinds of scenarios. Accordingly, with this survey, we aim to capture the key concepts of unlearning techniques. The existing solutions are classified and summarized based on their characteristics within an up-to-date and comprehensive review of each category’s advantages and limitations. The survey concludes by highlighting some of the outstanding issues with unlearning techniques, along with some feasible directions for new research opportunities. | [] | Train |
43,764 | 30 | Title: Creating Large Language Model Resistant Exams: Guidelines and Strategies
Abstract: The proliferation of Large Language Models (LLMs), such as ChatGPT, has raised concerns about their potential impact on academic integrity, prompting the need for LLM-resistant exam designs. This article investigates the performance of LLMs on exams and their implications for assessment, focusing on ChatGPT's abilities and limitations. We propose guidelines for creating LLM-resistant exams, including content moderation, deliberate inaccuracies, real-world scenarios beyond the model's knowledge base, effective distractor options, evaluating soft skills, and incorporating non-textual information. The article also highlights the significance of adapting assessments to modern tools and promoting essential skills development in students. By adopting these strategies, educators can maintain academic integrity while ensuring that assessments accurately reflect contemporary professional settings and address the challenges and opportunities posed by artificial intelligence in education. | [
45659
] | Train |
43,765 | 6 | Title: Alexa Arena: A User-Centric Interactive Platform for Embodied AI
Abstract: We introduce Alexa Arena, a user-centric simulation platform for Embodied AI (EAI) research. Alexa Arena provides a variety of multi-room layouts and interactable objects, for the creation of human-robot interaction (HRI) missions. With user-friendly graphics and control mechanisms, Alexa Arena supports the development of gamified robotic tasks readily accessible to general human users, thus opening a new venue for high-efficiency HRI data collection and EAI system evaluation. Along with the platform, we introduce a dialog-enabled instruction-following benchmark and provide baseline results for it. We make Alexa Arena publicly available to facilitate research in building generalizable and assistive embodied agents. | [
22045,
41566
] | Train |
43,766 | 30 | Title: Can the Inference Logic of Large Language Models be Disentangled into Symbolic Concepts?
Abstract: In this paper, we explain the inference logic of large language models (LLMs) as a set of symbolic concepts. Many recent studies have discovered that traditional DNNs usually encode sparse symbolic concepts. However, because an LLM has many more parameters than traditional DNNs, whether the LLM also encodes sparse symbolic concepts is still an open problem. Therefore, in this paper, we propose to disentangle the inference score of LLMs for dialogue tasks into a small number of symbolic concepts. We verify that we can use those sparse concepts to well estimate all inference scores of the LLM on all arbitrarily masking states of the input sentence. We also evaluate the transferability of concepts encoded by an LLM and verify that symbolic concepts usually exhibit high transferability across similar input sentences. More crucially, those symbolic concepts can be used to explain the exact reasons accountable for the LLM's prediction errors. | [
33097,
4705,
5756,
13001
] | Train |
43,767 | 10 | Title: Identifiability of direct effects from summary causal graphs
Abstract: Dynamic structural causal models (SCMs) are a powerful framework for reasoning in dynamic systems about direct effects which measure how a change in one variable affects another variable while holding all other variables constant. The causal relations in a dynamic structural causal model can be qualitatively represented with a full-time causal graph. Assuming linearity and causal sufficiency and given the full-time causal graph, the direct causal effect is always identifiable and can be estimated from data by adjusting on any set of variables given by the so-called single-door criterion. However, in many applications such a graph is not available for various reasons, but experts nevertheless have access to an abstraction of the full-time causal graph which represents causal relations between time series while omitting temporal information. This paper presents a complete identifiability result which characterizes all cases for which the direct effect is graphically identifiable from summary causal graphs, and gives two sound finite adjustment sets that can be used to estimate the direct effect whenever it is identifiable. | [
44750
] | Test |
43,768 | 16 | Title: EverLight: Indoor-Outdoor Editable HDR Lighting Estimation
Abstract: Because of the diversity in lighting environments, existing illumination estimation techniques have been designed explicitly on indoor or outdoor environments. Methods have focused specifically on capturing accurate energy (e.g., through parametric lighting models), which emphasizes shading and strong cast shadows; or producing plausible texture (e.g., with GANs), which prioritizes plausible reflections. Approaches which provide editable lighting capabilities have been proposed, but these tend to be with simplified lighting models, offering limited realism. In this work, we propose to bridge the gap between these recent trends in the literature, and propose a method which combines a parametric light model with 360{\deg} panoramas, ready to use as HDRI in rendering engines. We leverage recent advances in GAN-based LDR panorama extrapolation from a regular image, which we extend to HDR using parametric spherical gaussians. To achieve this, we introduce a novel lighting co-modulation method that injects lighting-related features throughout the generator, tightly coupling the original or edited scene illumination within the panorama generation process. In our representation, users can easily edit light direction, intensity, number, etc. to impact shading while providing rich, complex reflections while seamlessly blending with the edits. Furthermore, our method encompasses indoor and outdoor environments, demonstrating state-of-the-art results even when compared to domain-specific methods. | [] | Train |
43,769 | 23 | Title: Log Parsing with Prompt-based Few-shot Learning
Abstract: Logs generated by large-scale software systems provide crucial information for engineers to understand the system status and diagnose problems of the systems. Log parsing, which converts raw log messages into structured data, is the first step to enabling automated log analytics. Existing log parsers extract the common part as log templates using statistical features. However, these log parsers often fail to identify the correct templates and parameters because: 1) they often overlook the semantic meaning of log messages, and 2) they require domain-specific knowledge for different log datasets. To address the limitations of existing methods, in this paper, we propose LogPPT to capture the patterns of templates using prompt-based few-shot learning. LogPPT utilises a novel prompt tuning method to recognise keywords and parameters based on a few labelled log data. In addition, an adaptive random sampling algorithm is designed to select a small yet diverse training set. We have conducted extensive experiments on 16 public log datasets. The experimental results show that LogPPT is effective and efficient for log parsing. | [
19032,
35289,
40846,
7096,
29465
] | Train |
43,770 | 23 | Title: On the Benefits of Semi-Supervised Test Case Generation for Cyber-Physical Systems
Abstract: Testing complex Cyber Physical Systems (CPSs) can be expensive and time consuming. Current state-of-the-art methods that explore this problem are fully-supervised; i.e. they require that all examples are labeled. On the other hand, the GenClu system (introduced in this paper) takes a semi-supervised approach; i.e. (a) only a small subset of information is actually labeled (via simulation) and (b) those labels are then spread across the rest of the data. When applied to five open-source CPSs, GenClu's test generation can be multiple orders of magnitude faster than the prior state of the art. Further, when assessed via mutation testing, tests generated by GenClu were as good or better than anything else tested here. Hence, we recommend semi-supervised methods over prior methods (evolutionary search and fully-supervised learning). | [
45292
] | Train |
43,771 | 16 | Title: Human Preference Score v2: A Solid Benchmark for Evaluating Human Preferences of Text-to-Image Synthesis
Abstract: Recent text-to-image generative models can generate high-fidelity images from text inputs, but the quality of these generated images cannot be accurately evaluated by existing evaluation metrics. To address this issue, we introduce Human Preference Dataset v2 (HPD v2), a large-scale dataset that captures human preferences on images from a wide range of sources. HPD v2 comprises 798,090 human preference choices on 430,060 pairs of images, making it the largest dataset of its kind. The text prompts and images are deliberately collected to eliminate potential bias, which is a common issue in previous datasets. By fine-tuning CLIP on HPD v2, we obtain Human Preference Score v2 (HPS v2), a scoring model that can more accurately predict text-generated images' human preferences. Our experiments demonstrate that HPS v2 generalizes better than previous metrics across various image distributions and is responsive to algorithmic improvements of text-to-image generative models, making it a preferable evaluation metric for these models. We also investigate the design of the evaluation prompts for text-to-image generative models, to make the evaluation stable, fair and easy-to-use. Finally, we establish a benchmark for text-to-image generative models using HPS v2, which includes a set of recent text-to-image models from the academia, community and industry. The code and dataset is / will be available at https://github.com/tgxs002/HPSv2. | [
37152,
6145,
40330,
34701,
20435,
36084
] | Train |
43,772 | 16 | Title: MindDiffuser: Controlled Image Reconstruction from Human Brain Activity with Semantic and Structural Diffusion
Abstract: Reconstructing visual stimuli from measured functional magnetic resonance imaging (fMRI) has been a meaningful and challenging task. Previous studies have successfully achieved reconstructions with structures similar to the original images, such as the outlines and size of some natural images. However, these reconstructions lack explicit semantic information and are difficult to discern. In recent years, many studies have utilized multi-modal pre-trained models with stronger generative capabilities to reconstruct images that are semantically similar to the original ones. However, these images have uncontrollable structural information such as position and orientation. To address both of the aforementioned issues simultaneously, we propose a two-stage image reconstruction model called MindDiffuser, utilizing Stable Diffusion. In Stage 1, the VQ-VAE latent representations and the CLIP text embeddings decoded from fMRI are put into the image-to-image process of Stable Diffusion, which yields a preliminary image that contains semantic and structural information. In Stage 2, we utilize the low-level CLIP visual features decoded from fMRI as supervisory information, and continually adjust the two features in Stage 1 through backpropagation to align the structural information. The results of both qualitative and quantitative analyses demonstrate that our proposed model has surpassed the current state-of-the-art models in terms of reconstruction results on Natural Scenes Dataset (NSD). Furthermore, the results of ablation experiments indicate that each component of our model is effective for image reconstruction. | [
10813
] | Test |
43,773 | 8 | Title: Cross-Layer Assisted Early Congestion Control for Cloud VR Services in 5G Edge Network
Abstract: Cloud virtual reality (VR) has emerged as a promising technology, offering users a highly immersive and easily accessible experience. However, the current 5G radio access network faces challenges in accommodating the bursty traffic generated by multiple cloud VR flows simultaneously, leading to congestion at the 5G base station and increased delays. In this research, we present a comprehensive quantitative analysis that highlights the underlying causes for the poor delay performance of cloud VR flows within the existing 5G protocol stack and network. To address these issues, we propose a novel cross-layer information-assisted congestion control mechanism deployed in the 5G edge network. Experiment results show that our mechanism enhances the number of concurrent flows meeting delay standards by 1.5x to 2.5x, while maintaining a smooth network load. These findings underscore the potential of leveraging 5G edge nodes as a valuable resource to effectively meet the anticipated demands of future services. | [] | Validation |
43,774 | 24 | Title: Client Selection for Generalization in Accelerated Federated Learning: A Multi-Armed Bandit Approach
Abstract: Federated learning (FL) is an emerging machine learning (ML) paradigm used to train models across multiple nodes (i.e., clients) holding local data sets, without explicitly exchanging the data. It has attracted a growing interest in recent years due to its advantages in terms of privacy considerations, and communication resources. In FL, selected clients train their local models and send a function of the models to the server, which consumes a random processing and transmission time. The server updates the global model and broadcasts it back to the clients. The client selection problem in FL is to schedule a subset of the clients for training and transmission at each given time so as to optimize the learning performance. In this paper, we present a novel multi-armed bandit (MAB)-based approach for client selection to minimize the training latency without harming the ability of the model to generalize, that is, to provide reliable predictions for new observations. We develop a novel algorithm to achieve this goal, dubbed Bandit Scheduling for FL (BSFL). We analyze BSFL theoretically, and show that it achieves a logarithmic regret, defined as the loss of BSFL as compared to a genie that has complete knowledge about the latency means of all clients. Furthermore, simulation results using synthetic and real datasets demonstrate that BSFL is superior to existing methods. | [] | Train |
43,775 | 24 | Title: Cached Operator Reordering: A Unified View for Fast GNN Training
Abstract: Graph Neural Networks (GNNs) are a powerful tool for handling structured graph data and addressing tasks such as node classification, graph classification, and clustering. However, the sparse nature of GNN computation poses new challenges for performance optimization compared to traditional deep neural networks. We address these challenges by providing a unified view of GNN computation, I/O, and memory. By analyzing the computational graphs of the Graph Convolutional Network (GCN) and Graph Attention (GAT) layers -- two widely used GNN layers -- we propose alternative computation strategies. We present adaptive operator reordering with caching, which achieves a speedup of up to 2.43x for GCN compared to the current state-of-the-art. Furthermore, an exploration of different caching schemes for GAT yields a speedup of up to 1.94x. The proposed optimizations save memory, are easily implemented across various hardware platforms, and have the potential to alleviate performance bottlenecks in training large-scale GNN models. | [] | Train |
43,776 | 30 | Title: Cross-Lingual NER for Financial Transaction Data in Low-Resource Languages
Abstract: We propose an efficient modeling framework for cross-lingual named entity recognition in semi-structured text data. Our approach relies on both knowledge distillation and consistency training. The modeling framework leverages knowledge from a large language model (XLMRoBERTa) pre-trained on the source language, with a student-teacher relationship (knowledge distillation). The student model incorporates unsupervised consistency training (with KL divergence loss) on the low-resource target language. We employ two independent datasets of SMSs in English and Arabic, each carrying semi-structured banking transaction information, and focus on exhibiting the transfer of knowledge from English to Arabic. With access to only 30 labeled samples, our model can generalize the recognition of merchants, amounts, and other fields from English to Arabic. We show that our modeling approach, while efficient, performs best overall when compared to state-of-the-art approaches like DistilBERT pre-trained on the target language or a supervised model directly trained on labeled data in the target language. Our experiments show that it is enough to learn to recognize entities in English to reach reasonable performance in a low-resource language in the presence of a few labeled samples of semi-structured data. The proposed framework has implications for developing multi-lingual applications, especially in geographies where digital endeavors rely on both English and one or more low-resource language(s), sometimes mixed with English or employed singly. | [] | Train |
43,777 | 27 | Title: Disturbance Injection Under Partial Automation: Robust Imitation Learning for Long-Horizon Tasks
Abstract: Partial Automation (PA) with intelligent support systems has been introduced in industrial machinery and advanced automobiles to reduce the burden of long hours of human operation. Under PA, operators perform manual operations (providing actions) and operations that switch between automatic and manual mode (mode-switching). Since PA reduces the total duration of manual operation, these two types of operations (actions and mode-switching) can be replicated by imitation learning with high sample efficiency. To this end, this letter proposes Disturbance Injection under Partial Automation (DIPA) as a novel imitation learning framework. In DIPA, mode and actions (in the manual mode) are assumed to be observable in each state and are used to learn both action and mode-switching policies. The above learning is robustified by injecting disturbances into the operator's actions, optimizing the disturbance level to minimize the covariate shift under PA. We experimentally validated the effectiveness of our method for long-horizon tasks in two simulations and a real robot environment, and confirmed that our method outperformed the previous methods and reduced the demonstration burden. | [] | Train |