id (string, 9–16 chars) | title (string, 4–278 chars) | abstract (string, 3–4.08k chars) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64, 0–541k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2106.04993 | Initialization Matters: Regularizing Manifold-informed Initialization for Neural Recommendation Systems | Proper initialization is crucial to the optimization and the generalization of neural networks. However, most existing neural recommendation systems initialize the user and item embeddings randomly. In this work, we propose a new initialization scheme for user and item embeddings called Laplacian Eigenmaps with Popularity-based Regularization for Isolated Data (LEPORID). LEPORID endows the embeddings with information regarding multi-scale neighborhood structures on the data manifold and performs adaptive regularization to compensate for high embedding variance on the tail of the data distribution. Exploiting matrix sparsity, LEPORID embeddings can be computed efficiently. We evaluate LEPORID in a wide range of neural recommendation models. In contrast to the recent surprising finding that the simple K-nearest-neighbor (KNN) method often outperforms neural recommendation systems, we show that existing neural systems initialized with LEPORID often perform on par with or better than KNN. To maximize the effects of the initialization, we propose the Dual-Loss Residual Recommendation (DLR2) network, which, when initialized with LEPORID, substantially outperforms both traditional and state-of-the-art neural recommender systems. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 239,934 |
2108.05430 | Loci of Poncelet Triangles in the General Closure Case | We analyze loci of triangle centers over variants of two well-known triangle porisms: the bicentric and confocal families. Specifically, we evoke the general version of Poncelet's closure theorem whereby individual sides can be made tangent to separate in-pencil caustics. We show that despite the more complicated dynamic geometry, the loci of certain triangle centers and associated points remain conics and/or circles. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 250,298 |
1801.00634 | High Dimensional Spaces, Deep Learning and Adversarial Examples | In this paper, we analyze deep learning from a mathematical point of view and derive several novel results. The results are based on intriguing mathematical properties of high dimensional spaces. We first look at perturbation based adversarial examples and show how they can be understood using topological and geometrical arguments in high dimensions. We point out a mistake in an argument presented in prior published literature, and we present a more rigorous, general and correct mathematical result to explain adversarial examples in terms of the topology of image manifolds. Second, we look at optimization landscapes of deep neural networks and examine the number of saddle points relative to that of local minima. Third, we show how the multiresolution nature of images explains perturbation based adversarial examples in the form of a stronger result. Our results state that the expectation of the $L_2$-norm of adversarial perturbations is $O\left(\frac{1}{\sqrt{n}}\right)$ and therefore shrinks to 0 as the image resolution $n$ becomes arbitrarily large. Finally, by incorporating the parts-whole manifold learning hypothesis for natural images, we investigate the workings of deep neural networks and the root causes of adversarial examples, and discuss how future improvements can be made and how adversarial examples can be eliminated. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 87,605 |
2206.10087 | Bio-inspired Neural Network-based Optimal Path Planning for UUVs under the Effect of Ocean Currents | To eliminate the effect of ocean currents when addressing the optimal path in the underwater environment, an intelligent algorithm designed for the unmanned underwater vehicle (UUV) is proposed in this paper. The algorithm consists of two parts: a neural network-based algorithm that deduces the shortest path and avoids all possible collisions; and an adjusting component that offsets the deviation caused by the effect of ocean currents. The optimization results of the proposed algorithm are presented in detail, and compared with a path planning algorithm that does not consider the effect of currents. Results of the comparison prove the effectiveness of the path planning method when encountering currents of different directions and velocities. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 303,786 |
2206.04516 | Accurate Node Feature Estimation with Structured Variational Graph Autoencoder | Given a graph with partial observations of node features, how can we estimate the missing features accurately? Feature estimation is a crucial problem for analyzing real-world graphs whose features are commonly missing during the data collection process. Accurate estimation not only provides diverse information about nodes but also supports the inference of graph neural networks that require the full observation of node features. However, designing an effective approach for estimating high-dimensional features is challenging, since it requires an estimator to have large representation power, increasing the risk of overfitting. In this work, we propose SVGA (Structured Variational Graph Autoencoder), an accurate method for feature estimation. SVGA applies strong regularization to the distribution of latent variables by structured variational inference, which models the prior of the variables as a Gaussian Markov random field based on the graph structure. As a result, SVGA combines the advantages of probabilistic inference and graph neural networks, achieving state-of-the-art performance on real datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 301,657 |
2401.04965 | ConvConcatNet: a deep convolutional neural network to reconstruct mel spectrogram from the EEG | To investigate the processing of speech in the brain, simple linear models are commonly used to establish a relationship between brain signals and speech features. However, these linear models are ill-equipped to model a highly dynamic and complex non-linear system like the brain. Although non-linear methods with neural networks have been developed recently, reconstructing unseen stimuli from unseen subjects' EEG is still a highly challenging task. This work presents a novel method, ConvConcatNet, to reconstruct mel-spectrograms from EEG, in which a deep convolutional neural network and an extensive concatenation operation are combined. With our ConvConcatNet model, the Pearson correlation between the reconstructed and the target mel-spectrogram reaches 0.0420, which ranked No. 1 in Task 2 of the Auditory EEG Challenge. The code and models implementing our work will be available on GitHub: https://github.com/xuxiran/ConvConcatNet | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 420,605 |
2312.14897 | Improving the Design of Linear Controllers for Homogeneous Platooning under Disturbances | This paper addresses the problem of longitudinal platooning control of homogeneous vehicles subject to external disturbances, such as wind gusts, road slopes, and parametric uncertainties. Our control objective is to maintain the relative distance of the vehicles with respect to their nearby teammates in a decentralized manner. Therefore, we propose a novel control law to compute the acceleration commands of each vehicle that includes the integral of the spacing error, which endows the controller with the capability to mitigate external disturbances in steady-state conditions. We adopt a constant distance spacing policy and employ generalized look-ahead and bidirectional network topologies. We provide formal conditions for the controller synthesis that ensure the internal stability of the platoon under the proposed control law in the presence of constant and bounded disturbances affecting multiple vehicles. Experiments considering nonlinear vehicle models in the high-fidelity CARLA simulator environment under different disturbances, parametric uncertainties, and several network topologies demonstrate the effectiveness of our approach. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 417,787 |
2010.08853 | From Local Structures to Size Generalization in Graph Neural Networks | Graph neural networks (GNNs) can process graphs of different sizes, but their ability to generalize across sizes, specifically from small to large graphs, is still not well understood. In this paper, we identify an important type of data where generalization from small to large graphs is challenging: graph distributions for which the local structure depends on the graph size. This effect occurs in multiple important graph learning domains, including social and biological networks. We first prove that when there is a difference between the local structures, GNNs are not guaranteed to generalize across sizes: there are "bad" global minima that do well on small graphs but fail on large graphs. We then study the size-generalization problem empirically and demonstrate that when there is a discrepancy in local structure, GNNs tend to converge to non-generalizing solutions. Finally, we suggest two approaches for improving size generalization, motivated by our findings. Notably, we propose a novel Self-Supervised Learning (SSL) task aimed at learning meaningful representations of local structures that appear in large graphs. Our SSL task improves classification accuracy on several popular datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 201,326 |
2411.15283 | A Plug-and-Play Temporal Normalization Module for Robust Remote Photoplethysmography | Remote photoplethysmography (rPPG) extracts PPG signals from subtle color changes in facial videos, showing strong potential for health applications. However, most rPPG methods rely on intensity differences between consecutive frames, missing long-term signal variations affected by motion or lighting artifacts, which reduces accuracy. This paper introduces Temporal Normalization (TN), a flexible plug-and-play module compatible with any end-to-end rPPG network architecture. By capturing long-term temporally normalized features following detrending, TN effectively mitigates motion and lighting artifacts, significantly boosting the rPPG prediction performance. When integrated into four state-of-the-art rPPG methods, TN delivered performance improvements ranging from 34.3% to 94.2% in heart rate measurement tasks across four widely-used datasets. Notably, TN showed even greater performance gains in smaller models. We further discuss and provide insights into the mechanisms behind TN's effectiveness. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 510,551 |
2008.13600 | $\beta$-Cores: Robust Large-Scale Bayesian Data Summarization in the Presence of Outliers | Modern machine learning applications should be able to address the intrinsic challenges arising over inference on massive real-world datasets, including scalability and robustness to outliers. Despite the multiple benefits of Bayesian methods (such as uncertainty-aware predictions, incorporation of expert knowledge, and hierarchical modeling), the quality of classic Bayesian inference depends critically on whether observations conform with the assumed data generating model, which is impossible to guarantee in practice. In this work, we propose a variational inference method that, in a principled way, can simultaneously scale to large datasets, and robustify the inferred posterior with respect to the existence of outliers in the observed data. Reformulating Bayes' theorem via the $\beta$-divergence, we posit a robustified pseudo-Bayesian posterior as the target of inference. Moreover, relying on the recent formulations of Riemannian coresets for scalable Bayesian inference, we propose a sparse variational approximation of the robustified posterior and an efficient stochastic black-box algorithm to construct it. Overall, our method allows releasing cleansed data summaries that can be applied broadly in scenarios including structured data corruption. We illustrate the applicability of our approach in diverse simulated and real datasets, and various statistical models, including Gaussian mean inference, logistic and neural linear regression, demonstrating its superiority to existing Bayesian summarization methods in the presence of outliers. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 193,893 |
1603.08358 | Fast, Exact and Multi-Scale Inference for Semantic Image Segmentation with Deep Gaussian CRFs | In this work we propose a structured prediction technique that combines the virtues of Gaussian Conditional Random Fields (G-CRF) with Deep Learning: (a) our structured prediction task has a unique global optimum that is obtained exactly from the solution of a linear system, (b) the gradients of our model parameters are analytically computed using closed form expressions, in contrast to the memory-demanding contemporary deep structured prediction approaches that rely on back-propagation-through-time, (c) our pairwise terms do not have to be simple hand-crafted expressions, as in the line of works building on the DenseCRF, but can rather be `discovered' from data through deep architectures, and (d) our system can be trained in an end-to-end manner. Building on standard tools from numerical analysis we develop very efficient algorithms for inference and learning, as well as a customized technique adapted to the semantic segmentation task. This efficiency allows us to explore more sophisticated architectures for structured prediction in deep learning: we introduce multi-resolution architectures to couple information across scales in a joint optimization framework, yielding systematic improvements. We demonstrate the utility of our approach on the challenging VOC PASCAL 2012 image segmentation benchmark, showing substantial improvements over strong baselines. We make all of our code and experiments available at {https://github.com/siddharthachandra/gcrf} | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 53,777 |
1909.11791 | Single-modal and Multi-modal False Arrhythmia Alarm Reduction using Attention-based Convolutional and Recurrent Neural Networks | This study proposes a deep learning model that effectively suppresses false alarms in intensive care units (ICUs) without ignoring the true alarms, using single- and multimodal biosignals. Most of the current work in the literature is either rule-based, requiring prior knowledge of arrhythmia analysis to build rules, or relies on classical machine learning approaches depending on hand-engineered features. In this work, we apply convolutional neural networks to automatically extract time-invariant features, an attention mechanism to put more emphasis on the important regions of the input segmented signal(s) that are more likely to contribute to an alarm, and long short-term memory units to capture the temporal information presented in the signal segments. We trained our method efficiently using a two-step training algorithm (i.e., pre-training and fine-tuning the proposed network) on the dataset provided by the PhysioNet computing in cardiology challenge 2015. The evaluation results demonstrate that the proposed method obtains better results compared to other existing algorithms for the false alarm reduction task in ICUs. The proposed method achieves a sensitivity of 93.88% and a specificity of 92.05% for the alarm classification, considering three different signals. In addition, our experiments for 5 separate alarm types lead to significant results even when considering just a single-lead ECG (e.g., a sensitivity of 90.71%, a specificity of 88.30%, and an AUC of 89.51 for the Ventricular Tachycardia alarm type). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 146,909 |
2404.11834 | Actor-Critic Reinforcement Learning with Phased Actor | Policy gradient methods in actor-critic reinforcement learning (RL) have become perhaps the most promising approaches to solving continuous optimal control problems. However, the trial-and-error nature of RL and the inherent randomness associated with solution approximations cause variations in the learned optimal values and policies. This has significantly hindered their successful deployment in real-life applications where control responses need to meet dynamic performance criteria deterministically. Here we propose a novel phased actor in actor-critic (PAAC) method, aiming at improving policy gradient estimation and thus the quality of the control policy. Specifically, PAAC accounts for both $Q$ value and TD error in its actor update. We prove qualitative properties of PAAC for learning convergence of the value and policy, solution optimality, and stability of system dynamics. Additionally, we show variance reduction in policy gradient estimation. PAAC performance is systematically and quantitatively evaluated in this study using DeepMind Control Suite (DMC). Results show that PAAC leads to significant performance improvement measured by total cost, learning variance, robustness, learning speed and success rate. As PAAC can be piggybacked onto general policy gradient learning frameworks, we select well-known methods such as direct heuristic dynamic programming (dHDP), deep deterministic policy gradient (DDPG) and their variants to demonstrate the effectiveness of PAAC. Consequently we provide a unified view on these related policy gradient algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 447,631 |
2309.02080 | Optimization tools for Twin-in-the-Loop vehicle control design: analysis and yaw-rate tracking case study | Given the urgent need of simplifying the end-of-line tuning of complex vehicle dynamics controllers, the Twin-in-the-Loop Control (TiL-C) approach was recently proposed in the automotive field. In TiL-C, a digital twin is run on-board to compute a nominal control action in run-time and an additional block C_delta is used to compensate for the mismatch between the simulator and the real vehicle. As the digital twin is assumed to be the best replica available of the real plant, the key issue in TiL-C becomes the tuning of the compensator, which must be performed relying on data only. In this paper, we investigate the use of different black-box optimization techniques for the calibration of C_delta. More specifically, we compare the originally proposed Bayesian Optimization (BO) approach with the recently developed Set Membership Global Optimization (SMGO) and Virtual Reference Feedback Tuning (VRFT), a one-shot direct data-driven design method. The analysis will be carried out within a professional multibody simulation environment on a novel TiL-C application case study -- the yaw-rate tracking problem -- so as to further prove the effectiveness of TiL-C on a challenging problem. Simulations will show that the VRFT approach is capable of providing a well tuned controller after a single iteration, while 10 to 15 iterations are necessary for refining it with global optimizers. Also, SMGO is shown to significantly reduce the computational effort required by BO. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 389,923 |
1902.00563 | Forecasting Intra-Hour Imbalances in Electric Power Systems | Keeping the electricity production in balance with the actual demand is becoming a difficult and expensive task despite the involvement of experienced human operators. This is due to the increasing complexity of the electric power grid system, with intermittent renewable production as one of the contributors. Advance information about an impending imbalance can help the transmission system operator to adjust the production plans, and thus ensure a high security of supply by reducing the use of costly balancing reserves, and consequently reduce undesirable fluctuations of the 50 Hz power system frequency. In this paper, we introduce the relatively new problem of intra-hour imbalance forecasting for the transmission system operator (TSO). We focus on the use case of the Norwegian TSO, Statnett. We present a complementary imbalance forecasting tool that is able to support the TSO in determining the trend of future imbalances, and show the potential to proactively alleviate imbalances with a higher accuracy compared to the contemporary solution. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 120,430 |
1602.06461 | Undermining and Strengthening Social Networks through Network Modification | Social networks have well-documented effects at the individual and aggregate level. Consequently, it is often useful to understand how an attempt to influence a network will change its structure and consequently achieve other goals. We develop a framework for network modification that allows for arbitrary objective functions, types of modification (e.g. edge weight addition, edge weight removal, node removal, and covariate value change), and recovery mechanisms (i.e. how a network responds to interventions). The framework outlined in this paper helps to situate the existing work on network interventions and opens up many new possibilities for intervening in networks. In particular, we use two case studies to highlight the potential impact of empirically calibrating the objective function and network recovery mechanisms, as well as showing how interventions beyond node removal can be optimised. First, we simulate an optimal removal of nodes from the Noordin terrorist network in order to reduce the expected number of attacks (based on empirically predicting the terrorist collaboration network from multiple types of network ties). Second, we simulate optimally strengthening ties within entrepreneurial ecosystems in six developing countries. In both cases we estimate ERGM models to simulate how a network will endogenously evolve after intervention. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 52,368 |
2002.00734 | Analysis of the quotation corpus of the Russian Wiktionary | The quantitative evaluation of quotations in the Russian Wiktionary was performed using the developed Wiktionary parser. It was found that the number of quotations in the dictionary is growing fast (51.5 thousand in 2011, 62 thousand in 2012). These quotations were extracted and saved in the relational database of a machine-readable dictionary. For this database, tables related to the quotations were designed. A histogram of the distribution of quotations of literary works written in different years was built. An attempt was made to explain the characteristics of the histogram by associating it with the years of the most popular and cited (in the Russian Wiktionary) writers of the nineteenth century. It was found that more than one-third of all the quotations (the example sentences) contained in the Russian Wiktionary are taken by the editors of a Wiktionary entry from the Russian National Corpus. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 162,442 |
2111.05464 | Are Transformers More Robust Than CNNs? | Transformer emerges as a powerful tool for visual recognition. In addition to demonstrating competitive performance on a broad range of visual benchmarks, recent works also argue that Transformers are much more robust than Convolutional Neural Networks (CNNs). Nonetheless, surprisingly, we find these conclusions are drawn from unfair experimental settings, where Transformers and CNNs are compared at different scales and are applied with distinct training frameworks. In this paper, we aim to provide the first fair & in-depth comparisons between Transformers and CNNs, focusing on robustness evaluations. With our unified training setup, we first challenge the previous belief that Transformers outshine CNNs when measuring adversarial robustness. More surprisingly, we find CNNs can easily be as robust as Transformers on defending against adversarial attacks, if they properly adopt Transformers' training recipes. Regarding generalization on out-of-distribution samples, we show pre-training on (external) large-scale datasets is not a fundamental requirement for enabling Transformers to achieve better performance than CNNs. Moreover, our ablations suggest such stronger generalization is largely attributable to the Transformer's self-attention-like architectures per se, rather than to other training setups. We hope this work can help the community better understand and benchmark the robustness of Transformers and CNNs. The code and models are publicly available at https://github.com/ytongbai/ViTs-vs-CNNs. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 265,800 |
2211.14554 | DynaGAN: Dynamic Few-shot Adaptation of GANs to Multiple Domains | Few-shot domain adaptation to multiple domains aims to learn a complex image distribution across multiple domains from a few training images. A na\"ive solution here is to train a separate model for each domain using few-shot domain adaptation methods. Unfortunately, this approach mandates linearly-scaled computational resources both in memory and computation time and, more importantly, such separate models cannot exploit the shared knowledge between target domains. In this paper, we propose DynaGAN, a novel few-shot domain-adaptation method for multiple target domains. DynaGAN has an adaptation module, which is a hyper-network that dynamically adapts a pretrained GAN model into the multiple target domains. Hence, we can fully exploit the shared knowledge across target domains and avoid the linearly-scaled computational requirements. As it is still computationally challenging to adapt a large-size GAN model, we design our adaptation module light-weight using the rank-1 tensor decomposition. Lastly, we propose a contrastive-adaptation loss suitable for multi-domain few-shot adaptation. We validate the effectiveness of our method through extensive qualitative and quantitative evaluations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 332,892 |
2304.10767 | How good are variational autoencoders at transfer learning? | Variational autoencoders (VAEs) are used for transfer learning across various research domains such as music generation or medical image analysis. However, there is no principled way to assess before transfer which components to retrain or whether transfer learning is likely to help on a target task. We propose to explore this question through the lens of representational similarity. Specifically, using Centred Kernel Alignment (CKA) to evaluate the similarity of VAEs trained on different datasets, we show that encoders' representations are generic but decoders' are specific. Based on these insights, we discuss the implications for selecting which components of a VAE to retrain and propose a method to visually assess whether transfer learning is likely to help on classification tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 359,551 |
2412.18328 | On Codes over Eisenstein Integers | We propose constructions of codes over quotient rings of Eisenstein integers equipped with the Euclidean, square Euclidean, and hexagonal distances as a generalization of codes over Eisenstein integer fields. By set partitioning, we effectively divide the ring of Eisenstein integers into equal-sized subsets for distinct encoding. Unlike in Eisenstein integer fields of prime size, where partitioning is not feasible due to structural limitations, we partition the quotient rings into additive subgroups in such a way that the minimum square Euclidean and hexagonal distances of each subgroup are strictly larger than in the original set. This technique facilitates multilevel coding and enhances signal constellation efficiency. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 520,376 |
2212.06041 | Sentiment Analysis of Persian Language: Review of Algorithms, Approaches and Datasets | Sentiment analysis aims to extract people's emotions and opinions from their comments on the web. It is widely used in businesses to detect sentiment in social data, gauge brand reputation, and understand customers. Most articles in this area have concentrated on the English language, whereas resources for the Persian language are limited. In this review paper, recently published articles (between 2018 and 2022) on sentiment analysis in the Persian language have been collected, and their methods, approaches, and datasets are explained and analyzed. Almost all the methods used to solve sentiment analysis are machine learning and deep learning. The purpose of this paper is to examine 40 different approaches to sentiment analysis in the Persian language, analyze the datasets along with the accuracy of the algorithms applied to them, and review the strengths and weaknesses of each. Among all the methods, transformers such as BERT and recurrent neural networks such as LSTM and Bi-LSTM have achieved higher accuracy in sentiment analysis. In addition to the methods and approaches, the datasets reviewed between 2018 and 2022 are listed, and information about each dataset and its details is provided. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 335,989 |
2102.05348 | Regional Attention with Architecture-Rebuilt 3D Network for RGB-D Gesture Recognition | Human gesture recognition has drawn much attention in the area of computer vision. However, the performance of gesture recognition is always influenced by some gesture-irrelevant factors like the background and the clothes of performers. Therefore, focusing on the regions of the hand/arm is important to gesture recognition. Meanwhile, a more adaptive architecture-searched network structure can also perform better than block-fixed ones like ResNet, since it increases the diversity of features at different stages of the network. In this paper, we propose a regional attention with architecture-rebuilt 3D network (RAAR3DNet) for gesture recognition. We replace the fixed Inception modules with the automatically rebuilt structure through the network via Neural Architecture Search (NAS), owing to the different shape and representation ability of features in the early, middle, and late stage of the network. It enables the network to capture different levels of feature representations at different layers more adaptively. Meanwhile, we also design a stackable regional attention module called dynamic-static Attention (DSA), which derives a Gaussian guidance heatmap and dynamic motion map to highlight the hand/arm regions and the motion information in the spatial and temporal domains, respectively. Extensive experiments on two recent large-scale RGB-D gesture datasets validate the effectiveness of the proposed method and show it outperforms state-of-the-art methods. The codes of our method are available at: https://github.com/zhoubenjia/RAAR3DNet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 219,410 |
2202.03208 | Elastic waveform inversion in the frequency domain for an application in
mechanized tunneling | The excavation process in mechanized tunneling can be improved by reconnaissance of the geology ahead. Nondestructive exploration can be achieved by means of seismic imaging. A full waveform inversion approach, which works in the frequency domain, is investigated for application in tunneling. The approach tries to minimize the difference between seismic records from field observations and from a discretized ground model by changing the ground properties. The final ground model may then be a representation of the geology. The elastic wave modeling approach used is described, as well as the application of convolutional perfectly matched layers. The proposed inversion scheme uses the discrete adjoint gradient method, a multi-scale approach, and the L-BFGS method. Numerical parameters are identified, and the forward wave modeling approach is validated prior to the inversion of every example. Two-dimensional blind tests with two different ground scenarios and two different source and receiver station configurations are performed and analyzed, where only the seismic records, the source functions, and the ambient ground properties are provided. Finally, an inversion for a three-dimensional tunnel model is performed and analyzed for three different source and receiver station configurations. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 279,120
2401.14304 | Constraint-Aware Mesh Refinement Method by Reachability Set Envelope of
Curvature Bounded Paths | This paper presents an enhanced direct-method-based approach for the real-time solution of optimal control problems that handles path constraints, such as obstacles. The principal contributions of this work are twofold: first, existing methods for constructing reachability sets in the literature are extended to derive the envelope of these sets, which determines the region swept by all feasible trajectories between adjacent sample points. Second, we propose a novel method to guarantee that constraints are not violated between discrete states in two dimensions through a mesh refinement approach. To illustrate the effectiveness of the proposed methodology, numerical simulations are conducted on real-time path planning for fixed-wing unmanned aerial vehicles. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 424,042
2301.04751 | Artificial Intelligence Generated Coins for Size Comparison | Authors of scientific articles use coins in photographs as a size reference for objects. For this purpose, coins are placed next to objects when taking the photo. In this letter we propose a novel method that uses artificial intelligence (AI) generated images of coins to provide a size reference in photos. The newest generation of image-generation models is able to quickly produce realistic high-quality images from textual descriptions. With the proposed method, no physical coin is required while taking photos. Coins can be added to photos that contain none. Furthermore, we show how the coin motif can be matched to the object. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 340,157
2304.11293 | Nonverbal Cues in Human-Robot Interaction: A Communication Studies
Perspective | Communication between people is characterized by a broad range of nonverbal cues. Transferring these cues into the design of robots and other artificial agents that interact with people may foster more natural, inviting, and accessible experiences. In this position paper, we offer a series of definitive nonverbal codes for human-robot interaction (HRI) that address the five human sensory systems (visual, auditory, haptic, olfactory, gustatory) drawn from the field of communication studies. We discuss how these codes can be translated into design patterns for HRI using a curated sample of the communication studies and HRI literatures. As nonverbal codes are an essential mode in human communication, we argue that integrating robotic nonverbal codes in HRI will afford robots a feeling of "aliveness" or "social agency" that would otherwise be missing. We end with suggestions for research directions to stimulate work on nonverbal communication within the field of HRI and improve communication between human and robots. | true | false | false | false | true | false | false | true | false | false | false | false | false | true | false | false | false | false | 359,755 |
2205.11103 | Proceedings Seventeenth International Workshop on the ACL2 Theorem
Prover and its Applications | This volume contains a selection of papers presented at the 17th International Workshop on the ACL2 Theorem Prover and its Applications (ACL2 2022). The workshops are the premier technical forum for presenting research and experiences related to ACL2. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 297,996 |
2212.03869 | Pre-Training With Scientific Text Improves Educational Question
Generation | With the boom of digital educational materials and scalable e-learning systems, the potential for realising AI-assisted personalised learning has skyrocketed. In this landscape, the automatic generation of educational questions will play a key role, enabling scalable self-assessment when a global population is manoeuvring their personalised learning journeys. We develop EduQG, a novel educational question generation model built by adapting a large language model. Our initial experiments demonstrate that EduQG can produce superior educational questions by pre-training on scientific text. | false | false | false | false | true | true | true | false | true | false | false | false | false | true | false | false | false | false | 335,263 |
2409.11363 | CORE-Bench: Fostering the Credibility of Published Research Through a
Computational Reproducibility Agent Benchmark | AI agents have the potential to aid users on a variety of consequential tasks, including conducting scientific research. To spur the development of useful agents, we need benchmarks that are challenging, but more crucially, directly correspond to real-world tasks of interest. This paper introduces such a benchmark, designed to measure the accuracy of AI agents in tackling a crucial yet surprisingly challenging aspect of scientific research: computational reproducibility. This task, fundamental to the scientific process, involves reproducing the results of a study using the provided code and data. We introduce CORE-Bench (Computational Reproducibility Agent Benchmark), a benchmark consisting of 270 tasks based on 90 scientific papers across three disciplines (computer science, social science, and medicine). Tasks in CORE-Bench consist of three difficulty levels and include both language-only and vision-language tasks. We provide an evaluation system to measure the accuracy of agents in a fast and parallelizable way, saving days of evaluation time for each run compared to a sequential implementation. We evaluated two baseline agents: the general-purpose AutoGPT and a task-specific agent called CORE-Agent. We tested both variants using two underlying language models: GPT-4o and GPT-4o-mini. The best agent achieved an accuracy of 21% on the hardest task, showing the vast scope for improvement in automating routine scientific tasks. Having agents that can reproduce existing work is a necessary step towards building agents that can conduct novel research and could verify and improve the performance of other research agents. We hope that CORE-Bench can improve the state of reproducibility and spur the development of future research agents. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | true | false | false | false | 489,122 |
2210.07572 | Cross-Scale Context Extracted Hashing for Fine-Grained Image Binary
Encoding | Deep hashing has been widely applied to large-scale image retrieval tasks owing to efficient computation and low storage cost by encoding high-dimensional image data into binary codes. Since binary codes do not contain as much information as float features, the essence of binary encoding is preserving the main context to guarantee retrieval quality. However, the existing hashing methods have great limitations on suppressing redundant background information and accurately encoding from Euclidean space to Hamming space by a simple sign function. In order to solve these problems, a Cross-Scale Context Extracted Hashing Network (CSCE-Net) is proposed in this paper. Firstly, we design a two-branch framework to capture fine-grained local information while maintaining high-level global semantic information. Besides, Attention guided Information Extraction module (AIE) is introduced between two branches, which suppresses areas of low context information cooperated with global sliding windows. Unlike previous methods, our CSCE-Net learns a content-related Dynamic Sign Function (DSF) to replace the original simple sign function. Therefore, the proposed CSCE-Net is context-sensitive and able to perform well on accurate image binary encoding. We further demonstrate that our CSCE-Net is superior to the existing hashing methods, which improves retrieval performance on standard benchmarks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 323,776 |
2105.01798 | Pervasive AI for IoT applications: A Survey on Resource-efficient
Distributed Artificial Intelligence | Artificial intelligence (AI) has witnessed a substantial breakthrough in a variety of Internet of Things (IoT) applications and services, spanning from recommendation systems to robotics control and military surveillance. This is driven by easier access to sensory data and the enormous scale of pervasive/ubiquitous devices that generate zettabytes (ZB) of real-time data streams. Designing accurate models using such data streams, to predict future insights and revolutionize the decision-making process, inaugurates pervasive systems as a worthy paradigm for a better quality of life. The confluence of pervasive computing and artificial intelligence, Pervasive AI, has expanded the role of ubiquitous IoT systems from mainly data collection to executing distributed computations, offering a promising alternative to centralized learning while presenting various challenges. In this context, wise cooperation and resource scheduling should be envisaged among IoT devices (e.g., smartphones, smart vehicles) and infrastructure (e.g., edge nodes and base stations) to avoid communication and computation overheads and ensure maximum performance. In this paper, we conduct a comprehensive survey of the recent techniques developed to overcome these resource challenges in pervasive AI systems. Specifically, we first present an overview of pervasive computing, its architecture, and its intersection with artificial intelligence. We then review the background, applications, and performance metrics of AI, particularly Deep Learning (DL) and online learning, running in a ubiquitous system. Next, we provide a deep literature review of communication-efficient techniques, from both algorithmic and system perspectives, for distributed inference, training, and online learning tasks across the combination of IoT devices, edge devices, and cloud servers. Finally, we discuss our future vision and research challenges.
| false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 233,628 |
2106.08008 | Towards Long-term Non-invasive Monitoring for Epilepsy via Wearable EEG
Devices | We present the implementation of seizure detection algorithms based on a minimal number of EEG channels on a parallel ultra-low-power embedded platform. The analyses are based on the CHB-MIT dataset, and include explorations of different classification approaches (Support Vector Machines, Random Forest, Extra Trees, AdaBoost) and different pre/post-processing techniques to maximize sensitivity while guaranteeing no false alarms. We analyze global and subject-specific approaches, considering all 23-electrodes or only 4 temporal channels. For 8s window size and subject-specific approach, we report zero false positives and 100% sensitivity. These algorithms are parallelized and optimized for a parallel ultra-low power (PULP) platform, enabling 300h of continuous monitoring on a 300 mAh battery, in a wearable form factor and power budget. These results pave the way for the implementation of affordable, wearable, long-term epilepsy monitoring solutions with low false-positive rates and high sensitivity, meeting both patient and caregiver requirements. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 241,152 |
2501.19308 | Ontological analysis of proactive life event services | Life event service is a direct digital public service provided jointly by several governmental institutions so that a person can fulfill all the obligations and use all the rights that arise due to a particular event or situation in personal life. Life event service consolidates several public services related to the same life event into one service for the service consumer. This paper presents an ontological analysis of life event services, which is based on the works by Guarino, Guizzardi, Nardi, Wagner, and others. The purpose of the ontological analysis is to understand the meanings of life event, proactive public service based on life event, and other related notions. This kind of ontological analysis is crucial because for implementing the hardware and software architectures of e-government and digital public services, it is essential to agree upon the precise meanings of the underlying terms. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 529,105 |
2307.11565 | Adversarial Feature Map Pruning for Backdoor | Deep neural networks have been widely used in many critical applications, such as autonomous vehicles and medical diagnosis. However, their security is threatened by backdoor attacks, which are achieved by adding artificial patterns to specific training data. Existing defense strategies primarily focus on using reverse engineering to reproduce the backdoor trigger generated by attackers and subsequently repair the DNN model by adding the trigger into inputs and fine-tuning the model with ground-truth labels. However, once the trigger generated by the attackers is complex and invisible, the defender cannot reproduce it successfully, and the DNN model will therefore not be repaired, as the trigger is not effectively removed. In this work, we propose Adversarial Feature Map Pruning for Backdoor (FMP) to mitigate backdoors in the DNN. Unlike existing defense strategies, which focus on reproducing backdoor triggers, FMP attempts to prune backdoor feature maps, which are trained to extract backdoor information from inputs. After pruning these backdoor feature maps, FMP fine-tunes the model with a secure subset of training data. Our experiments demonstrate that, compared to existing defense strategies, FMP can effectively reduce the Attack Success Rate (ASR) even against the most complex and invisible attack triggers (e.g., FMP decreases the ASR to 2.86\% on CIFAR10, which is 19.2\% to 65.41\% lower than baselines). Moreover, unlike conventional defense methods that tend to exhibit low robust accuracy (RA, that is, the accuracy of the model on poisoned data), FMP achieves a higher RA, indicating its superiority in maintaining model performance while mitigating the effects of backdoor attacks (e.g., FMP obtains 87.40\% RA on CIFAR10). Our code is publicly available at: https://github.com/retsuh-bqw/FMP.
| false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 380,957 |
1812.05248 | Shortcut Matrix Product States and its applications | Matrix Product States (MPS), also known as Tensor Train (TT) decomposition in mathematics, was originally proposed for describing (especially one-dimensional) quantum systems, and has recently found use in various applications such as compressing high-dimensional data, supervised kernel linear classifiers, and unsupervised generative modeling. However, when applied to systems that are not defined on one-dimensional lattices, a serious drawback of MPS is the exponential decay of correlations, which limits its power in capturing long-range dependencies among variables in the system. To alleviate this problem, we propose to introduce long-range interactions, which act as shortcuts, to MPS, resulting in a new model, \textit{Shortcut Matrix Product States} (SMPS). When chosen properly, the shortcuts can decrease the correlation length of the MPS significantly, while preserving computational efficiency. We develop efficient training methods for SMPS for various tasks, establish some of their mathematical properties, and show how to find good locations to add shortcuts. Finally, using extensive numerical experiments, we evaluate its performance in a variety of applications, including function fitting, partition function calculation of the $2$-d Ising model, and unsupervised generative modeling of handwritten digits, to illustrate its advantages over vanilla matrix product states. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 116,377
2403.17554 | Robust Stability for Multiagent Systems with Spatio-Temporally
Correlated Packet Loss | A problem with considering correlations in the analysis of multiagent system with stochastic packet loss is that they induce dependencies between agents that are otherwise decoupled, preventing the application of decomposition methods required for efficient evaluation. To circumvent that issue, this paper is proposing an approach based on analysing sets of networks with independent communication links, only considering the correlations in an implicit fashion. Combining ideas from the robust stabilization of Markov jump linear systems with recently proposed techniques for analysing packet loss in multiagent systems, we obtain a linear matrix inequality based stability condition which is independent of the number of agents. The main result is that the set of stabilized probability distributions has non-empty interior such that small correlations cannot lead to instability, even though only distributions of independent links were analysed. Moreover, two examples are provided to demonstrate the applicability of the results to practically relevant scenarios. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 441,520 |
2312.08520 | Revisiting Recommendation Loss Functions through Contrastive Learning
(Technical Report) | Inspired by the success of contrastive learning, we systematically examine recommendation losses, including listwise (softmax), pairwise (BPR), and pointwise (MSE and CCL) losses. In this endeavor, we introduce InfoNCE+, an optimized generalization of InfoNCE with balance coefficients, and highlight its performance advantages, particularly when aligned with our new decoupled contrastive loss, MINE+. We also leverage debiased InfoNCE to debias pointwise recommendation loss (CCL) as Debiased CCL. Interestingly, our analysis reveals that linear models like iALS and EASE are inherently debiased. Empirical results demonstrates the effectiveness of MINE+ and Debiased-CCL. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 415,325 |
1903.03757 | Hierarchy Denoising Recursive Autoencoders for 3D Scene Layout
Prediction | Indoor scenes exhibit rich hierarchical structure in 3D object layouts. Many tasks in 3D scene understanding can benefit from reasoning jointly about the hierarchical context of a scene, and the identities of objects. We present a variational denoising recursive autoencoder (VDRAE) that generates and iteratively refines a hierarchical representation of 3D object layouts, interleaving bottom-up encoding for context aggregation and top-down decoding for propagation. We train our VDRAE on large-scale 3D scene datasets to predict both instance-level segmentations and a 3D object detections from an over-segmentation of an input point cloud. We show that our VDRAE improves object detection performance on real-world 3D point cloud datasets compared to baselines from prior work. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 123,810 |
2306.12679 | Constructing Colloquial Dataset for Persian Sentiment Analysis of Social
Microblogs | Introduction: Microblogging websites have amassed rich data sources for sentiment analysis and opinion mining. In this regard, sentiment classification has frequently proven inefficient because microblog posts typically lack syntactically consistent terms and representatives, since users on these social networks do not like to write lengthy statements. There are also limitations for low-resource languages. The Persian language has exceptional characteristics and demands annotated data and models for the sentiment analysis task that are distinct from those for English. Method: This paper first constructs a user opinion dataset called ITRC-Opinion, built collaboratively and in-house. Our dataset contains 60,000 informal and colloquial Persian texts from social microblogs such as Twitter and Instagram. Second, this study proposes a new architecture based on the convolutional neural network (CNN) model for more effective sentiment analysis of colloquial text in social microblog posts. The constructed datasets are used to evaluate the presented architecture. Furthermore, models such as LSTM, CNN-RNN, BiLSTM, and BiGRU, with different word embeddings including FastText, GloVe, and Word2vec, were applied to our dataset and their results evaluated. Results: The results demonstrate the benefit of our dataset and the proposed model (72% accuracy), showing a meaningful improvement in sentiment classification performance. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 375,027
2501.16106 | Towards Explainable Multimodal Depression Recognition for Clinical
Interviews | Recently, multimodal depression recognition for clinical interviews (MDRC) has attracted considerable attention. Existing MDRC studies mainly focus on improving task performance and have achieved significant progress. However, for clinical applications, model transparency is critical, and previous works ignore the interpretability of decision-making processes. To address this issue, we propose an Explainable Multimodal Depression Recognition for Clinical Interviews (EMDRC) task, which aims to provide evidence for depression recognition by summarizing symptoms and uncovering underlying causes. Given an interviewer-participant interaction scenario, the goal of EMDRC is to produce a structured summary of the participant's symptoms based on the eight-item Patient Health Questionnaire depression scale (PHQ-8) and to predict their depression severity. To tackle the EMDRC task, we construct a new dataset based on an existing MDRC dataset. Moreover, we utilize the PHQ-8 and propose a PHQ-aware multimodal multi-task learning framework, which captures utterance-level symptom-related semantic information to help generate the dialogue-level summary. Experimental results on our annotated dataset demonstrate the superiority of our proposed methods over baseline systems on the EMDRC task. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 527,819
2412.13553 | Combining Aggregated Attention and Transformer Architecture for Accurate
and Efficient Performance of Spiking Neural Networks | Spiking Neural Networks (SNNs) have attracted significant attention in recent years due to their distinctive low-power characteristics. Meanwhile, Transformer models, known for their powerful self-attention mechanisms and parallel processing capabilities, have demonstrated exceptional performance across various domains, including natural language processing and computer vision. Despite the significant advantages of both SNNs and Transformers, directly combining the low-power benefits of SNNs with the high performance of Transformers remains challenging. Specifically, while the sparse computing mode of SNNs contributes to reduced energy consumption, traditional attention mechanisms depend on dense matrix computations and complex softmax operations. This reliance poses significant challenges for effective execution in low-power scenarios. Given the tremendous success of Transformers in deep learning, it is a necessary step to explore the integration of SNNs and Transformers to harness the strengths of both. In this paper, we propose a novel model architecture, Spike Aggregation Transformer (SAFormer), that integrates the low-power characteristics of SNNs with the high-performance advantages of Transformer models. The core contribution of SAFormer lies in the design of the Spike Aggregated Self-Attention (SASA) mechanism, which significantly simplifies the computation process by calculating attention weights using only the query and key spike matrices, thereby effectively reducing energy consumption. Additionally, we introduce a Depthwise Convolution Module (DWC) to enhance the feature extraction capabilities, further improving overall accuracy. We evaluated SAFormer and demonstrated that it outperforms state-of-the-art SNNs in both accuracy and energy consumption, highlighting its significant advantages in low-power and high-performance computing.
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 518,340 |
1706.08189 | Robust Video-Based Eye Tracking Using Recursive Estimation of Pupil
Characteristics | Video-based eye tracking is a valuable technique in various research fields. Numerous open-source eye tracking algorithms have been developed in recent years, primarily designed for general application with many different camera types. These algorithms do not, however, capitalize on the high frame rate of eye tracking cameras often employed in psychophysical studies. We present a pupil detection method that utilizes this high-speed property to obtain reliable predictions through recursive estimation about certain pupil characteristics in successive camera frames. These predictions are subsequently used to carry out novel image segmentation and classification routines to improve pupil detection performance. Based on results from hand-labelled eye images, our approach was found to have a greater detection rate, accuracy and speed compared to other recently published open-source pupil detection algorithms. The program's source code, together with a graphical user interface, can be downloaded at https://github.com/tbrouns/eyestalker | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 75,958 |
2407.08020 | Interactive Segmentation Model for Placenta Segmentation from 3D
Ultrasound images | Placenta volume measurement from 3D ultrasound images is critical for predicting pregnancy outcomes, and manual annotation is the gold standard. However, such manual annotation is expensive and time-consuming. Automated segmentation algorithms can often successfully segment the placenta, but these methods may not consistently produce robust segmentations suitable for practical use. Recently, inspired by the Segment Anything Model (SAM), deep learning-based interactive segmentation models have been widely applied in the medical imaging domain. These models produce a segmentation from visual prompts provided to indicate the target region, which may offer a feasible solution for practical use. However, none of these models are specifically designed for interactively segmenting 3D ultrasound images, which remain challenging due to the inherent noise of this modality. In this paper, we evaluate publicly available state-of-the-art 3D interactive segmentation models in contrast to a human-in-the-loop approach for the placenta segmentation task. The Dice score, normalized surface Dice, averaged symmetric surface distance, and 95-percent Hausdorff distance are used as evaluation metrics. We consider a Dice score of 0.95 a successful segmentation. Our results indicate that the human-in-the-loop segmentation model reaches this standard. Moreover, we assess the efficiency of the human-in-the-loop model as a function of the amount of prompts. Our results demonstrate that the human-in-the-loop model is both effective and efficient for interactive placenta segmentation. The code is available at \url{https://github.com/MedICL-VU/PRISM-placenta}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 471,973 |
2401.16795 | Performance Insights-based AI-driven Football Transfer Fee Prediction | We developed an artificial intelligence approach to predict the transfer fee of a football player. This model can help clubs make better decisions about which players to buy and sell, which can lead to improved performance and increased club budgets. Having collected data on player performance, transfer fees, and other factors that might affect a player's value, we then used this data to train a machine learning model that can accurately predict a player's impact on the game. We further passed the obtained results as one of the features to the predictor of transfer fees. The model can help clubs identify players who are undervalued and who could be sold for a profit. It can also help clubs avoid overpaying for players. We believe that our model can be a valuable tool for football clubs. It can help them make better decisions about player recruitment and transfers. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 424,988 |
1502.07123 | Degrees-of-Freedom of the K-User MISO Interference Channel with Delayed
Local CSIT | This paper considers a K-user Multiple-Input-Single-Output (MISO) Interference Channel (IC), where the channel state information obtained by the transmitters (CSIT) is perfect, but completely outdated. A Retrospective Interference Alignment (RIA) using such delayed CSIT was proposed by the authors of [1] for the MISO Broadcast Channel (BC), but the extension to the MISO IC is a non-trivial step as each transmitter only has the message intended for the corresponding user. Recently, [7] focused on a Single-Input-Single-Output (SISO) IC and solved such bottleneck by inventing a distributed higher order symbol generation. Our main work is to extend [7] to the MISO case by integrating some features of the scheme proposed in [1]. The achieved sum Degrees-of-Freedom (DoF) performance is asymptotically given by 64/15 when $K\to\infty$, outperforming all the previously known results. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 40,550 |
2406.09979 | HIRO: Hierarchical Information Retrieval Optimization | Retrieval-Augmented Generation (RAG) has revolutionized natural language processing by dynamically integrating external knowledge into Large Language Models (LLMs), addressing their limitation of static training datasets. Recent implementations of RAG leverage hierarchical data structures, which organize documents at various levels of summarization and information density. This complexity, however, can cause LLMs to "choke" on information overload, necessitating more sophisticated querying mechanisms. In this context, we introduce Hierarchical Information Retrieval Optimization (HIRO), a novel querying approach that employs a Depth-First Search (DFS)-based recursive similarity score calculation and branch pruning. This method uniquely minimizes the context delivered to the LLM without informational loss, effectively managing the challenge of excessive data. HIRO's refined approach is validated by a 10.85% improvement in performance on the NarrativeQA dataset. | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | false | false | 464,172 |
2104.13366 | Unsupervised 3D Shape Completion through GAN Inversion | Most 3D shape completion approaches rely heavily on partial-complete shape pairs and learn in a fully supervised manner. Despite their impressive performances on in-domain data, when generalizing to partial shapes in other forms or real-world partial scans, they often obtain unsatisfactory results due to domain gaps. In contrast to previous fully supervised approaches, in this paper we present ShapeInversion, which introduces Generative Adversarial Network (GAN) inversion to shape completion for the first time. ShapeInversion uses a GAN pre-trained on complete shapes by searching for a latent code that gives a complete shape that best reconstructs the given partial input. In this way, ShapeInversion no longer needs paired training data, and is capable of incorporating the rich prior captured in a well-trained generative model. On the ShapeNet benchmark, the proposed ShapeInversion outperforms the SOTA unsupervised method, and is comparable with supervised methods that are learned using paired data. It also demonstrates remarkable generalization ability, giving robust results for real-world scans and partial inputs of various forms and incompleteness levels. Importantly, ShapeInversion naturally enables a series of additional abilities thanks to the involvement of a pre-trained GAN, such as producing multiple valid complete shapes for an ambiguous partial input, as well as shape manipulation and interpolation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 232,481 |
2403.12294 | A Comparative Investigation of Compositional Syntax and Semantics in
DALL-E 2 | In this study we compared how well DALL-E 2 visually represented the meaning of linguistic prompts also given to young children in comprehension tests. Sentences representing fundamental components of grammatical knowledge were selected from assessment tests used with several hundred English-speaking children aged 2-7 years for whom we had collected original item-level data. DALL-E 2 was given these prompts five times to generate 20 cartoons per item, for 9 adult judges to score. Results revealed no conditions in which DALL-E 2-generated images that matched the semantic accuracy of children, even at the youngest age (2 years). DALL-E 2 failed to assign the appropriate roles in reversible forms; it failed on negation despite an easier contrastive prompt than the children received; it often assigned the adjective to the wrong noun; it ignored implicit agents in passives. This work points to a clear absence of compositional sentence representations for DALL-E 2. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 439,104 |
1106.1802 | Fusions of Description Logics and Abstract Description Systems | Fusions are a simple way of combining logics. For normal modal logics, fusions have been investigated in detail. In particular, it is known that, under certain conditions, decidability transfers from the component logics to their fusion. Though description logics are closely related to modal logics, they are not necessarily normal. In addition, ABox reasoning in description logics is not covered by the results from modal logics. In this paper, we extend the decidability transfer results from normal modal logics to a large class of description logics. To cover different description logics in a uniform way, we introduce abstract description systems, which can be seen as a common generalization of description and modal logics, and show the transfer results in this general setting. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 10,784 |
1612.08837 | Codes in the Space of Multisets---Coding for Permutation Channels with
Impairments | Motivated by communication channels in which the transmitted sequences are subject to random permutations, as well as by certain DNA storage systems, we study the error control problem in settings where the information is stored/transmitted in the form of multisets of symbols from a given finite alphabet. A general channel model is assumed in which the transmitted multisets are potentially impaired by insertions, deletions, substitutions, and erasures of symbols. Several constructions of error-correcting codes for this channel are described, and bounds on the size of optimal codes correcting any given number of errors are derived. The construction based on the notion of Sidon sets in finite Abelian groups is shown to be optimal, in the sense of the asymptotic scaling of code redundancy, for any "error radius" and any alphabet size. It is also shown to be optimal in the stronger sense of maximal code cardinality in various cases. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 66,121
2105.15162 | Automatic audiovisual synchronisation for ultrasound tongue imaging | Ultrasound tongue imaging is used to visualise the intra-oral articulators during speech production. It is utilised in a range of applications, including speech and language therapy and phonetics research. Ultrasound and speech audio are recorded simultaneously, and in order to correctly use this data, the two modalities should be correctly synchronised. Synchronisation is achieved using specialised hardware at recording time, but this approach can fail in practice resulting in data of limited usability. In this paper, we address the problem of automatically synchronising ultrasound and audio after data collection. We first investigate the tolerance of expert ultrasound users to synchronisation errors in order to find the thresholds for error detection. We use these thresholds to define accuracy scoring boundaries for evaluating our system. We then describe our approach for automatic synchronisation, which is driven by a self-supervised neural network, exploiting the correlation between the two signals to synchronise them. We train our model on data from multiple domains with different speaker characteristics, different equipment, and different recording environments, and achieve an accuracy >92.4% on held-out in-domain data. Finally, we introduce a novel resource, the Cleft dataset, which we gathered with a new clinical subgroup and for which hardware synchronisation proved unreliable. We apply our model to this out-of-domain data, and evaluate its performance subjectively with expert users. Results show that users prefer our model's output over the original hardware output 79.3% of the time. Our results demonstrate the strength of our approach and its ability to generalise to data from new domains. | false | false | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 237,933 |
2010.13106 | Scribble-based Weakly Supervised Deep Learning for Road Surface
Extraction from Remote Sensing Images | Road surface extraction from remote sensing images using deep learning methods has achieved good performance, while most of the existing methods are based on fully supervised learning, which requires a large amount of training data with laborious per-pixel annotation. In this paper, we propose a scribble-based weakly supervised road surface extraction method named ScRoadExtractor, which learns from easily accessible scribbles such as centerlines instead of densely annotated road surface ground-truths. To propagate semantic information from sparse scribbles to unlabeled pixels, we introduce a road label propagation algorithm which considers both the buffer-based properties of road networks and the color and spatial information of super-pixels. The proposal masks generated from the road label propagation algorithm are utilized to train a dual-branch encoder-decoder network we designed, which consists of a semantic segmentation branch and an auxiliary boundary detection branch. We perform experiments on three diverse road datasets that are comprised of high-resolution remote sensing satellite and aerial images across the world. The results demonstrate that ScRoadExtractor exceeds the classic scribble-supervised segmentation method by 20% for the intersection over union (IoU) indicator and outperforms the state-of-the-art scribble-based weakly supervised methods by at least 4%. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 203,005
2202.00795 | Disaster Tweets Classification using BERT-Based Language Model | Social networking services have become an important communication channel in times of emergency. The aim of this study is to create a machine learning language model that is able to investigate if a person or area was in danger or not. The ubiquitousness of smartphones enables people to announce an emergency they are observing in real-time. Because of this, more agencies are interested in programmatically monitoring Twitter (i.e. disaster relief organizations and news agencies). Designing a language model that is able to understand and acknowledge when a disaster is happening based on social network posts will become more and more necessary over time. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 278,261
2410.09335 | Rethinking Data Selection at Scale: Random Selection is Almost All You
Need | Supervised fine-tuning (SFT) is crucial for aligning Large Language Models (LLMs) with human instructions. The primary goal during SFT is to select a small yet representative subset of training data from the larger pool, such that fine-tuning with this subset achieves results comparable to or even exceeding those obtained using the entire dataset. However, most existing data selection techniques are designed for small-scale data pools, which fail to meet the demands of real-world SFT scenarios. In this paper, we replicated several self-scoring methods (those that do not rely on external model assistance) on two million-scale datasets, and found that nearly all methods struggled to significantly outperform random selection when dealing with such large-scale data pools. Moreover, our comparisons suggest that, during SFT, diversity in data selection is more critical than simply focusing on high-quality data. We also analyzed the limitations of several current approaches, explaining why they perform poorly on large-scale datasets and why they are unsuitable for such contexts. Finally, we found that filtering data by token length offers a stable and efficient method for improving results. This approach, particularly when training on long text data, proves highly beneficial for relatively weaker base models, such as Llama3. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 497,541
2307.14294 | Unraveling the Complexity of Splitting Sequential Data: Tackling
Challenges in Video and Time Series Analysis | Splitting of sequential data, such as videos and time series, is an essential step in various data analysis tasks, including object tracking and anomaly detection. However, splitting sequential data presents a variety of challenges that can impact the accuracy and reliability of subsequent analyses. This concept article examines the challenges associated with splitting sequential data, including data acquisition, data representation, split ratio selection, setting up quality criteria, and choosing suitable selection strategies. We explore these challenges through two real-world examples: motor test benches and particle tracking in liquids. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 381,873 |
2108.00966 | Tuning Cooperative Behavior in Games with Nonlinear Opinion Dynamics | We examine the tuning of cooperative behavior in repeated multi-agent games using an analytically tractable, continuous-time, nonlinear model of opinion dynamics. Each modeled agent updates its real-valued opinion about each available strategy in response to payoffs and other agent opinions, as observed over a network. We show how the model provides a principled and systematic means to investigate behavior of agents that select strategies using rationality and reciprocity, key features of human decision-making in social dilemmas. For two-strategy games, we use bifurcation analysis to prove conditions for the bistability of two equilibria and conditions for the first (second) equilibrium to reflect all agents favoring the first (second) strategy. We prove how model parameters, e.g., level of attention to opinions of others (reciprocity), network structure, and payoffs, influence dynamics and, notably, the size of the region of attraction to each stable equilibrium. We provide insights by examining the tuning of the bistability of mutual cooperation and mutual defection and their regions of attraction for the repeated prisoner's dilemma and the repeated multi-agent public goods game. Our results generalize to games with more strategies, heterogeneity, and additional feedback dynamics, such as those designed to elicit cooperation. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 248,884 |
2307.16192 | Publish for Public: Improving Access of Public Libraries Users to
Research Findings through Plain Language Summaries | Public libraries play a crucial role in disseminating knowledge to society. However, most of their users do not have the specialized knowledge to understand the new research findings. Providing plain language summaries (PLSs) in public libraries is a way to make the new research findings more accessible and understandable for the public. This article proposes a framework for providing PLSs as a new service in public libraries. Drawing from the literature on science and society, PLSs, and public libraries, a theoretical framework is developed. The findings suggest that public libraries can collect PLSs through different methods, such as professional teams, researchers, crowdsourcing, etc. Library newsletters, special publications, brochures, independent online databases, and social networks are among the most effective for making PLSs accessible to users. By proposing a framework for providing PLSs in public libraries, this study helps to bridge the gap between scientific research and the public. | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | true | 382,512 |
2109.07632 | Robustness of Safety for Linear Dynamical Systems: Symbolic and
Numerical Approaches | In this paper, we study the robustness of safety properties of a linear dynamical system with respect to model uncertainties. Our paper involves three parts. In the first part, we provide symbolic (analytical) and numerical (representation based) techniques for computing the reachable set of uncertain linear systems. We further prove a relationship between the reachable set of a linear uncertain system and the maximum singular value of the uncertain dynamics matrix. Finally, we propose two heuristics to compute the robustness threshold of the system -- the maximum uncertainty that can be introduced to the system without violating the safety property. We evaluate the reachable set computation techniques, effects of singular values, and estimation of robustness threshold on two case studies from varied domains, illustrating the applicability, practicality and scalability of the artifacts, proposed in this paper, on real-world examples. We further evaluate our artifacts on several linear dynamical system benchmarks. To the best of the authors' knowledge, this is the first work to: (i) extend perturbation theory to compute reachable sets of linear uncertain systems, (ii) leverage the relationship between the reachable set of a linear system and the maximum singular values to determine the effect of uncertainties and (iii) estimate the threshold of robustness that can be tolerated by the system while remaining safe. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 255,593
2307.13995 | Take Your Pick: Enabling Effective Personalized Federated Learning
within Low-dimensional Feature Space | Personalized federated learning (PFL) is a popular framework that allows clients to have different models to address application scenarios where clients' data are in different domains. The typical model of a client in PFL features a global encoder trained by all clients to extract universal features from the raw data and personalized layers (e.g., a classifier) trained using the client's local data. Nonetheless, due to the differences between the data distributions of different clients (aka, domain gaps), the universal features produced by the global encoder largely encompass numerous components irrelevant to a certain client's local task. Some recent PFL methods address the above problem by personalizing specific parameters within the encoder. However, these methods encounter substantial challenges attributed to the high dimensionality and non-linearity of neural network parameter space. In contrast, the feature space exhibits a lower dimensionality, providing greater intuitiveness and interpretability as compared to the parameter space. To this end, we propose a novel PFL framework named FedPick. FedPick achieves PFL in the low-dimensional feature space by selecting task-relevant features adaptively for each client from the features generated by the global encoder based on its local data distribution. It presents a more accessible and interpretable implementation of PFL compared to those methods working in the parameter space. Extensive experimental results show that FedPick could effectively select task-relevant features for each client and improve model performance in cross-domain FL. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 381,777 |
2405.00378 | Adaptive Bidirectional Displacement for Semi-Supervised Medical Image
Segmentation | Consistency learning is a central strategy to tackle unlabeled data in semi-supervised medical image segmentation (SSMIS), which enforces the model to produce consistent predictions under the perturbation. However, most current approaches solely focus on utilizing a specific single perturbation, which can only cope with limited cases, while employing multiple perturbations simultaneously is hard to guarantee the quality of consistency learning. In this paper, we propose an Adaptive Bidirectional Displacement (ABD) approach to solve the above challenge. Specifically, we first design a bidirectional patch displacement based on reliable prediction confidence for unlabeled data to generate new samples, which can effectively suppress uncontrollable regions and still retain the influence of input perturbations. Meanwhile, to enforce the model to learn the potentially uncontrollable content, a bidirectional displacement operation with inverse confidence is proposed for the labeled images, which generates samples with more unreliable information to facilitate model learning. Extensive experiments show that ABD achieves new state-of-the-art performances for SSMIS, significantly improving different baselines. Source code is available at https://github.com/chy-upc/ABD. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 450,897 |
1805.00913 | Negotiation Strategies for Agents with Ordinal Preferences | Negotiation is a very common interaction between automated agents. Many common negotiation protocols work with cardinal utilities, even though ordinal preferences, which only rank the outcomes, are easier to elicit from humans. In this work we concentrate on negotiation with ordinal preferences over a finite set of outcomes. We study an intuitive protocol for bilateral negotiation, where the two parties make offers alternately. We analyze the negotiation protocol under different settings. First, we assume that each party has full information about the other party's preference order. We provide elegant strategies that specify a sub-game perfect equilibrium for the agents. We further show how the studied negotiation protocol almost completely implements a known bargaining rule. Finally, we analyze the no information setting. We study several solution concepts that are distribution-free, and analyze both the case where neither party knows the preference order of the other party, and the case where only one party is uninformed. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | 96,543 |
2302.02136 | Efficient End-to-End Video Question Answering with Pyramidal Multimodal
Transformer | This paper presents a new method for end-to-end Video Question Answering (VideoQA), aside from the current popularity of using large-scale pre-training with huge feature extractors. We achieve this with a pyramidal multimodal transformer (PMT) model, which simply incorporates a learnable word embedding layer, a few convolutional and transformer layers. We use the anisotropic pyramid to fulfill video-language interactions across different spatio-temporal scales. In addition to the canonical pyramid, which includes both bottom-up and top-down pathways with lateral connections, novel strategies are proposed to decompose the visual feature stream into spatial and temporal sub-streams at different scales and implement their interactions with the linguistic semantics while preserving the integrity of local and global semantics. We demonstrate better or on-par performances with high computational efficiency against state-of-the-art methods on five VideoQA benchmarks. Our ablation study shows the scalability of our model that achieves competitive results for text-to-video retrieval by leveraging feature extractors with reusable pre-trained weights, and also the effectiveness of the pyramid. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 343,873 |
2502.12737 | Beyond Seen Data: Improving KBQA Generalization Through Schema-Guided
Logical Form Generation | Knowledge base question answering (KBQA) aims to answer user questions in natural language using rich human knowledge stored in large KBs. As current KBQA methods struggle with unseen knowledge base elements at test time, we introduce SG-KBQA: a novel model that injects schema contexts into entity retrieval and logical form generation to tackle this issue. It uses the richer semantics and awareness of the knowledge base structure provided by schema contexts to enhance generalizability. We show that SG-KBQA achieves strong generalizability, outperforming state-of-the-art models on two commonly used benchmark datasets across a variety of test settings. Our source code is available at https://github.com/gaosx2000/SG_KBQA. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 535,028
2406.09444 | GenDistiller: Distilling Pre-trained Language Models based on an
Autoregressive Generative Model | Pre-trained speech language models such as HuBERT and WavLM leverage unlabeled speech data for self-supervised learning and offer powerful representations for numerous downstream tasks. Despite the success of these models, their high requirements for memory and computing resource hinder their application on resource restricted devices. Therefore, this paper introduces GenDistiller, a novel knowledge distillation framework which generates the hidden representations of the pre-trained teacher model directly by a much smaller student network. The proposed method takes the previous hidden layer as history and implements a layer-by-layer prediction of the teacher model autoregressively. Experiments on SUPERB reveal the advantage of GenDistiller over the baseline distilling method without an autoregressive framework, with 33% fewer parameters, similar time consumption and better performance on most of the SUPERB tasks. Ultimately, the proposed GenDistiller reduces the size of WavLM by 82%. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 463,945 |
1911.08125 | In Search of Credible News | We study the problem of finding fake online news. This is an important problem as news of questionable credibility have recently been proliferating in social media at an alarming scale. As this is an understudied problem, especially for languages other than English, we first collect and release to the research community three new balanced credible vs. fake news datasets derived from four online sources. We then propose a language-independent approach for automatically distinguishing credible from fake news, based on a rich feature set. In particular, we use linguistic (n-gram), credibility-related (capitalization, punctuation, pronoun use, sentiment polarity), and semantic (embeddings and DBPedia data) features. Our experiments on three different testsets show that our model can distinguish credible from fake news with very high accuracy. | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | false | false | 154,100 |
1810.07768 | Feature-based visual odometry prior for real-time semi-dense stereo SLAM | Robust and fast motion estimation and mapping is a key prerequisite for autonomous operation of mobile robots. The goal of performing this task solely on a stereo pair of video cameras is highly demanding and bears conflicting objectives: on one hand, the motion has to be tracked fast and reliably, on the other hand, high-level functions like navigation and obstacle avoidance depend crucially on a complete and accurate environment representation. In this work, we propose a two-layer approach for visual odometry and SLAM with stereo cameras that runs in real-time and combines feature-based matching with semi-dense direct image alignment. Our method initializes semi-dense depth estimation, which is computationally expensive, from motion that is tracked by a fast but robust keypoint-based method. Experiments on public benchmark and proprietary datasets show that our approach is faster than state-of-the-art methods without losing accuracy and yields comparable map building capabilities. Moreover, our approach is shown to handle large inter-frame motion and illumination changes much more robustly than its direct counterparts. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 110,690 |
1401.5093 | Localization and centrality in networks | Eigenvector centrality is a common measure of the importance of nodes in a network. Here we show that under common conditions the eigenvector centrality displays a localization transition that causes most of the weight of the centrality to concentrate on a small number of nodes in the network. In this regime the measure is no longer useful for distinguishing among the remaining nodes and its efficacy as a network metric is impaired. As a remedy, we propose an alternative centrality measure based on the nonbacktracking matrix, which gives results closely similar to the standard eigenvector centrality in dense networks where the latter is well behaved, but avoids localization and gives useful results in regimes where the standard centrality fails. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 30,157 |
2008.12496 | Few-Shot Object Detection via Knowledge Transfer | Conventional methods for object detection usually require substantial amounts of training data and annotated bounding boxes. If there are only a few training data and annotations, the object detectors easily overfit and fail to generalize. It exposes the practical weakness of the object detectors. On the other hand, humans can easily master new reasoning rules with only a few demonstrations using previously learned knowledge. In this paper, we introduce a few-shot object detection via knowledge transfer, which aims to detect objects from a few training examples. Central to our method is prototypical knowledge transfer with an attached meta-learner. The meta-learner takes support set images that include the few examples of the novel categories and base categories, and predicts prototypes that represent each category as a vector. Then, the prototypes reweight each RoI (Region-of-Interest) feature vector from a query image to remodel R-CNN predictor heads. To facilitate the remodeling process, we predict the prototypes under a graph structure, which propagates information of the correlated base categories to the novel categories with explicit guidance of prior knowledge that represents correlations among categories. Extensive experiments on the PASCAL VOC dataset verify the effectiveness of the proposed method. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 193,594
1802.10138 | Real-World Modeling of a Pathfinding Robot Using Robot Operating System
(ROS) | This paper presents a practical approach towards implementing pathfinding algorithms on real-world and low-cost non-commercial hardware platforms. While using robotics simulation platforms as a test-bed for our algorithms, we easily overlook real-world exogenous problems that are developed by external factors. Such problems involve robot wheel slips, asynchronous motors, abnormal sensory data or unstable power sources. The real-world dynamics tend to be very painful even for executing simple algorithms like a Wavefront planner or A-star search. This paper addresses designing techniques that tend to be robust as well as reusable for any hardware platforms; covering problems like controlling asynchronous drives, odometry offset issues and handling abnormal sensory feedback. The algorithm implementation medium and hardware design tools have been kept general in order to present our work as a serving platform for future researchers and robotics enthusiasts working in the field of path planning robotics. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 91,464
1812.11725 | Total Variation with Overlapping Group Sparsity and Lp Quasinorm for
Infrared Image Deblurring under Salt-and-Pepper Noise | Because of the limitations of the infrared imaging principle and the properties of infrared imaging systems, infrared images have some drawbacks, including a lack of details, indistinct edges, and a large amount of salt-and-pepper noise. To improve the sparse characteristics of the image while maintaining the image edges and weakening staircase artifacts, this paper proposes a method that uses the Lp quasinorm instead of the L1 norm for infrared image deblurring with an overlapping group sparse total variation method. The Lp quasinorm introduces another degree of freedom, better describes image sparsity characteristics, and improves image restoration. Furthermore, we adopt the accelerated alternating direction method of multipliers and fast Fourier transform theory in the proposed method to improve the efficiency and robustness of our algorithm. Experiments show that under different conditions for blur and salt-and-pepper noise, the proposed method leads to excellent performance in terms of objective evaluation and subjective visual results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 117,618 |
2403.12906 | TexDreamer: Towards Zero-Shot High-Fidelity 3D Human Texture Generation | Texturing 3D humans with semantic UV maps remains a challenge due to the difficulty of acquiring reasonably unfolded UV. Despite recent text-to-3D advancements in supervising multi-view renderings using large text-to-image (T2I) models, issues persist with generation speed, text consistency, and texture quality, resulting in data scarcity among existing datasets. We present TexDreamer, the first zero-shot multimodal high-fidelity 3D human texture generation model. Utilizing an efficient texture adaptation finetuning strategy, we adapt large T2I model to a semantic UV structure while preserving its original generalization capability. Leveraging a novel feature translator module, the trained model is capable of generating high-fidelity 3D human textures from either text or image within seconds. Furthermore, we introduce ArTicuLated humAn textureS (ATLAS), the largest high-resolution (1024 X 1024) 3D human texture dataset which contains 50k high-fidelity textures with text descriptions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 439,384 |
1310.4707 | Emergence of Blind Areas in Information Spreading | Recently, contagion-based (disease, information, etc.) spreading on social networks has been extensively studied. In this paper, other than traditional full interaction, we propose a partial interaction based spreading model, considering that the informed individuals would transmit information to only a certain fraction of their neighbors due to the transmission ability in real-world social networks. Simulation results on three representative networks (BA, ER, WS) indicate that the spreading efficiency is highly correlated with the network heterogeneity. In addition, a special phenomenon, namely \emph{Information Blind Areas} where the network is separated by several information-unreachable clusters, will emerge from the spreading process. Furthermore, we also find that the size distribution of such information blind areas obeys a power-law-like distribution, which has a very similar exponent to that of site percolation. Detailed analyses show that the critical value decreases with network heterogeneity for the spreading process, which is completely contrary to that of random selection. Moreover, the critical value in the latter process is also larger than that of the former for the same network. These findings might shed some light on an in-depth understanding of the effect of network properties on information spreading. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 27,832 |
2106.07240 | Mitigating Biases in Toxic Language Detection through Invariant
Rationalization | Automatic detection of toxic language plays an essential role in protecting social media users, especially minority groups, from verbal abuse. However, biases toward some attributes, including gender, race, and dialect, exist in most training datasets for toxicity detection. The biases make the learned models unfair and can even exacerbate the marginalization of people. Considering that current debiasing methods for general natural language understanding tasks cannot effectively mitigate the biases in the toxicity detectors, we propose to use invariant rationalization (InvRat), a game-theoretic framework consisting of a rationale generator and a predictor, to rule out the spurious correlation of certain syntactic patterns (e.g., identity mentions, dialect) to toxicity labels. We empirically show that our method yields lower false positive rate in both lexical and dialectal attributes than previous debiasing methods. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 240,837 |
2011.11235 | An Empirical Study of Representation Learning for Reinforcement Learning
in Healthcare | Reinforcement Learning (RL) has recently been applied to sequential estimation and prediction problems identifying and developing hypothetical treatment strategies for septic patients, with a particular focus on offline learning with observational data. In practice, successful RL relies on informative latent states derived from sequential observations to develop optimal treatment strategies. To date, how best to construct such states in a healthcare setting is an open question. In this paper, we perform an empirical study of several information encoding architectures using data from septic patients in the MIMIC-III dataset to form representations of a patient state. We evaluate the impact of representation dimension, correlations with established acuity scores, and the treatment policies derived from them. We find that sequentially formed state representations facilitate effective policy learning in batch settings, validating a more thoughtful approach to representation learning that remains faithful to the sequential and partial nature of healthcare data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 207,768 |
2405.07638 | DoLLM: How Large Language Models Understanding Network Flow Data to
Detect Carpet Bombing DDoS | It is an interesting question whether, and how, Large Language Models (LLMs) can understand non-language network data and help us detect unknown malicious flows. This paper takes Carpet Bombing as a case study and shows how to exploit LLMs' powerful capability in the networking area. Carpet Bombing is a new DDoS attack that has dramatically increased in recent years, significantly threatening network infrastructures. It targets multiple victim IPs within subnets, causing congestion on access links and disrupting network services for a vast number of users. Characterized by low rates and multiple vectors, these attacks challenge traditional DDoS defenses. We propose DoLLM, a DDoS detection model that utilizes open-source LLMs as its backbone. By reorganizing non-contextual network flows into Flow-Sequences and projecting them into the LLMs' semantic space as token embeddings, DoLLM leverages LLMs' contextual understanding to extract flow representations in the overall network context. The representations are used to improve the DDoS detection performance. We evaluate DoLLM with the public dataset CIC-DDoS2019 and a real NetFlow trace from a top-3 countrywide ISP. The tests have proven that DoLLM possesses strong detection capabilities. Its F1 score increased by up to 33.3% in zero-shot scenarios and by at least 20.6% in real ISP traces. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | true | 453,797 |
2201.11494 | GraphTune: A Learning-based Graph Generative Model with Tunable
Structural Features | Generative models for graphs have been actively studied for decades, and they have a wide range of applications. Recently, learning-based graph generation that reproduces real-world graphs has been attracting the attention of many researchers. Although several generative models that utilize modern machine learning technologies have been proposed, conditional generation of general graphs has been less explored in the field. In this paper, we propose a generative model that allows us to tune the value of a global-level structural feature as a condition. Our model, called GraphTune, makes it possible to tune the value of any structural feature of generated graphs using Long Short Term Memory (LSTM) and a Conditional Variational AutoEncoder (CVAE). We performed comparative evaluations of GraphTune and conventional models on a real graph dataset. The evaluations show that GraphTune makes it possible to tune the value of a global-level structural feature more precisely than conventional models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 277,314 |
2110.02311 | COVID-19 India Dataset: Parsing COVID-19 Data in Daily Health Bulletins
from States in India | While India has been one of the hotspots of COVID-19, data about the pandemic from the country has proved to be largely inaccessible at scale. Much of the data exists in unstructured form on the web, and limited aspects of such data are available through public APIs maintained manually through volunteer effort. This has proved to be difficult both in terms of ease of access to detailed data and with regards to the maintenance of manual data-keeping over time. This paper reports on our effort at automating the extraction of such data from public health bulletins with the help of a combination of classical PDF parsers and state-of-the-art machine learning techniques. In this paper, we will describe the automated data-extraction technique, the nature of the generated data, and exciting avenues of ongoing work. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 259,071 |
2104.06879 | Can Active Learning Preemptively Mitigate Fairness Issues? | Dataset bias is one of the prevailing causes of unfairness in machine learning. Addressing fairness at the data collection and dataset preparation stages therefore becomes an essential part of training fairer algorithms. In particular, active learning (AL) algorithms show promise for the task by drawing importance to the most informative training samples. However, the effect and interaction between existing AL algorithms and algorithmic fairness remain under-explored. In this paper, we study whether models trained with uncertainty-based AL heuristics such as BALD are fairer in their decisions with respect to a protected class than those trained with identically independently distributed (i.i.d.) sampling. We found a significant improvement on predictive parity when using BALD, while also improving accuracy compared to i.i.d. sampling. We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD. We found that, while addressing different fairness issues, their interaction further improves the results on most benchmarks and metrics we explored. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 230,223 |
2207.13218 | Global Incremental Flight Control for Agile Maneuvering of a Tailsitter
Flying Wing | This paper proposes a novel control law for accurate tracking of agile trajectories using a tailsitter flying wing unmanned aerial vehicle (UAV) that transitions between vertical take-off and landing (VTOL) and forward flight. The global control formulation enables maneuvering throughout the flight envelope, including uncoordinated flight with sideslip. Differential flatness of the nonlinear tailsitter dynamics with a simplified aerodynamics model is shown. Using the flatness transform, the proposed controller incorporates tracking of the position reference along with its derivatives velocity, acceleration and jerk, as well as the yaw reference and yaw rate. The inclusion of jerk and yaw rate references through an angular velocity feedforward term improves tracking of trajectories with fast-changing accelerations. The controller does not depend on extensive aerodynamic modeling but instead uses incremental nonlinear dynamic inversion (INDI) to compute control updates based on only a local input-output relation, resulting in robustness against discrepancies in the simplified aerodynamics equations. Exact inversion of the nonlinear input-output relation is achieved through the derived flatness transform. The resulting control algorithm is extensively evaluated in flight tests, where it demonstrates accurate trajectory tracking and challenging agile maneuvers, such as sideways flight and aggressive transitions while turning. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 310,226 |
2402.14646 | CoLoRA: Continuous low-rank adaptation for reduced implicit neural
modeling of parameterized partial differential equations | This work introduces reduced models based on Continuous Low Rank Adaptation (CoLoRA) that pre-train neural networks for a given partial differential equation and then continuously adapt low-rank weights in time to rapidly predict the evolution of solution fields at new physics parameters and new initial conditions. The adaptation can be either purely data-driven or via an equation-driven variational approach that provides Galerkin-optimal approximations. Because CoLoRA approximates solution fields locally in time, the rank of the weights can be kept small, which means that only few training trajectories are required offline so that CoLoRA is well suited for data-scarce regimes. Predictions with CoLoRA are orders of magnitude faster than with classical methods and their accuracy and parameter efficiency is higher compared to other neural network approaches. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 431,772 |
2107.12009 | Weakly Supervised Attention Model for RV Strain Classification from
volumetric CTPA Scans | Pulmonary embolus (PE) refers to obstruction of pulmonary arteries by blood clots. PE accounts for approximately 100,000 deaths per year in the United States alone. The clinical presentation of PE is often nonspecific, making the diagnosis challenging. Thus, rapid and accurate risk stratification is of paramount importance. High-risk PE is caused by right ventricular (RV) dysfunction from acute pressure overload, which in turn can help identify which patients require more aggressive therapy. Reconstructed four-chamber views of the heart on chest CT can detect right ventricular enlargement. CT pulmonary angiography (CTPA) is the gold standard in the diagnostic workup of suspected PE. Therefore, it can link diagnosis and risk stratification strategies. We developed a weakly supervised deep learning algorithm, with an emphasis on a novel attention mechanism, to automatically classify RV strain on CTPA. Our method is a 3D DenseNet model with integrated 3D residual attention blocks. We evaluated our model on a dataset of CTPAs of emergency department (ED) PE patients. This model achieved an area under the receiver operating characteristic curve (AUC) of 0.88 for classifying RV strain. The model showed a sensitivity of 87% and specificity of 83.7%. Our solution outperforms state-of-the-art 3D CNN networks. The proposed design allows for a fully automated network that can be trained easily in an end-to-end manner without requiring computationally intensive and time-consuming preprocessing or strenuous labeling of the data. We infer that unmarked CTPAs can be used for effective RV strain classification. This could be used as a second reader, alerting for high-risk PE patients. To the best of our knowledge, there are no previous deep learning-based studies that attempted to solve this problem. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 247,769 |
1906.00541 | Encoder-Powered Generative Adversarial Networks | We present an encoder-powered generative adversarial network (EncGAN) that is able to learn both the multi-manifold structure and the abstract features of data. Unlike the conventional decoder-based GANs, EncGAN uses an encoder to model the manifold structure and inverts the encoder to generate data. This unique scheme enables the proposed model to exclude discrete features from the smooth structure modeling and learn multi-manifold data without being hindered by the disconnections. Also, as EncGAN requires a single latent space to carry the information for all the manifolds, it builds abstract features shared among the manifolds in the latent space. For an efficient computation, we formulate EncGAN using a simple regularizer, and mathematically prove its validity. We also experimentally demonstrate that EncGAN successfully learns the multi-manifold structure and the abstract features of MNIST, 3D-chair and UT-Zap50k datasets. Our analysis shows that the learned abstract features are disentangled and enable good style transfer even when the source data is off the training distribution. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 133,422 |
1701.07872 | Cooling Codes: Thermal-Management Coding for High-Performance
Interconnects | High temperatures have dramatic negative effects on interconnect performance and, hence, numerous techniques have been proposed to reduce the power consumption of on-chip buses. However, existing methods fall short of fully addressing the thermal challenges posed by high-performance interconnects. In this paper, we introduce new efficient coding schemes that make it possible to directly control the peak temperature of a bus by effectively cooling its hottest wires. This is achieved by avoiding state transitions on the hottest wires for as long as necessary until their temperature drops off. We also reduce the average power consumption by making sure that the total number of state transitions on all the wires is below a prescribed threshold. We show how each of these two features can be coded for separately or, alternatively, how both can be achieved at the same time. In addition, error-correction for the transmitted information can be provided while controlling the peak temperature and/or the average power consumption. In general, our cooling codes use $n > k$ wires to encode a given $k$-bit bus. One of our goals herein is to determine the minimum possible number of wires $n$ needed to encode $k$ bits while satisfying any combination of the three desired properties. We provide full theoretical analysis in each case. In particular, we show that $n = k+t+1$ suffices to cool the $t$ hottest wires, and this is the best possible. Moreover, although the proposed coding schemes make use of sophisticated tools from combinatorics, discrete geometry, linear algebra, and coding theory, the resulting encoders and decoders are fully practical. They do not require significant computational overhead and can be implemented without sacrificing a large circuit area. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 67,366 |
1610.09712 | Real-Time Image Distortion Correction: Analysis and Evaluation of
FPGA-Compatible Algorithms | Image distortion correction is a critical pre-processing step for a variety of computer vision and image processing algorithms. Standard real-time software implementations are generally not suited for direct hardware porting, so appropriate versions need to be designed in order to obtain implementations deployable on FPGAs. In this paper, hardware-compatible techniques for image distortion correction are introduced and analyzed in detail. The considered solutions are compared in terms of output quality by using a geometrical-error-based approach, with particular emphasis on robustness with respect to increasing lens distortion. The required amount of hardware resources is also estimated for each considered approach. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 63,103 |
1706.08276 | Skeleton-Based Action Recognition Using Spatio-Temporal LSTM Network
with Trust Gates | Skeleton-based human action recognition has attracted a lot of research attention during the past few years. Recent works attempted to utilize recurrent neural networks to model the temporal dependencies between the 3D positional configurations of human body joints for better analysis of human activities in the skeletal data. The proposed work extends this idea to spatial domain as well as temporal domain to better analyze the hidden sources of action-related information within the human skeleton sequences in both of these domains simultaneously. Based on the pictorial structure of Kinect's skeletal data, an effective tree-structure based traversal framework is also proposed. In order to deal with the noise in the skeletal data, a new gating mechanism within LSTM module is introduced, with which the network can learn the reliability of the sequential data and accordingly adjust the effect of the input data on the updating procedure of the long-term context representation stored in the unit's memory cell. Moreover, we introduce a novel multi-modal feature fusion strategy within the LSTM unit in this paper. The comprehensive experimental results on seven challenging benchmark datasets for human action recognition demonstrate the effectiveness of the proposed method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 75,972 |
1910.11737 | Coalitional Games with Stochastic Characteristic Functions and Private
Types | The research on coalitional games has focused on how to share the reward among a coalition such that players are incentivized to collaborate. It assumes that the (deterministic or stochastic) characteristic function is known in advance. This paper studies a new setting (a task allocation problem) where the characteristic function is not known and is controlled by some private information from the players. Hence, the challenge here is twofold: (i) incentivize players to reveal their private information truthfully, and (ii) incentivize them to collaborate. We show that existing reward distribution mechanisms or auctions cannot solve the challenge. Hence, we propose the very first mechanism for the problem from the perspective of both mechanism design and coalitional games. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 150,868 |
2412.09946 | Enhancing Nursing and Elderly Care with Large Language Models: An
AI-Driven Framework | This paper explores the application of large language models (LLMs) in nursing and elderly care, focusing on AI-driven patient monitoring and interaction. We introduce a novel Chinese nursing dataset and implement incremental pre-training (IPT) and supervised fine-tuning (SFT) techniques to enhance LLM performance in specialized tasks. Using LangChain, we develop a dynamic nursing assistant capable of real-time care and personalized interventions. Experimental results demonstrate significant improvements, paving the way for AI-driven solutions to meet the growing demands of healthcare in aging populations. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 516,721 |
2401.06145 | Minuet: Accelerating 3D Sparse Convolutions on GPUs | Sparse Convolution (SC) is widely used for processing 3D point clouds that are inherently sparse. Different from dense convolution, SC preserves the sparsity of the input point cloud by only allowing outputs to specific locations. To efficiently compute SC, prior SC engines first use hash tables to build a kernel map that stores the necessary General Matrix Multiplication (GEMM) operations to be executed (Map step), and then use a Gather-GEMM-Scatter process to execute these GEMM operations (GMaS step). In this work, we analyze the shortcomings of prior state-of-the-art SC engines, and propose Minuet, a novel memory-efficient SC engine tailored for modern GPUs. Minuet proposes to (i) replace the hash tables used in the Map step with a novel segmented sorting double-traversed binary search algorithm that highly utilizes the on-chip memory hierarchy of GPUs, (ii) use a lightweight scheme to autotune the tile size in the Gather and Scatter operations of the GMaS step, such that to adapt the execution to the particular characteristics of each SC layer, dataset, and GPU architecture, and (iii) employ a padding-efficient GEMM grouping approach that reduces both memory padding and kernel launching overheads. Our evaluations show that Minuet significantly outperforms prior SC engines by on average $1.74\times$ (up to $2.22\times$) for end-to-end point cloud network executions. Our novel segmented sorting double-traversed binary search algorithm achieves superior speedups by $15.8\times$ on average (up to $26.8\times$) over prior SC engines in the Map step. The source code of Minuet is publicly available at https://github.com/UofT-EcoSystem/Minuet. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 421,032 |
2407.01888 | PO-MSCKF: An Efficient Visual-Inertial Odometry by Reconstructing the
Multi-State Constrained Kalman Filter with the Pose-only Theory | Efficient Visual-Inertial Odometry (VIO) is crucial for payload-constrained robots. Though modern optimization-based algorithms have achieved superior accuracy, the MSCKF-based VIO algorithms are still widely demanded for their efficient and consistent performance. As MSCKF is built upon the conventional multi-view geometry, the measured residuals are not only related to the state errors but also related to the feature position errors. To apply EKF fusion, a projection process is required to remove the feature position error from the observation model, which can lead to model and accuracy degradation. To obtain an efficient visual-inertial fusion model, while also preserving the model consistency, we propose to reconstruct the MSCKF VIO with the novel Pose-Only (PO) multi-view geometry description. In the newly constructed filter, we have modeled PO reprojection residuals, which are solely related to the motion states and thus overcome the requirements of space projection. Moreover, the new filter does not require any feature position information, which removes the computational cost and linearization errors brought in by the 3D reconstruction procedure. We have conducted comprehensive experiments on multiple datasets, where the proposed method has shown accuracy improvements and consistent performance in challenging sequences. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 469,492 |
1507.01297 | The Cost of Collaboration for Code and Art: Evidence from a Remixing
Community | In this paper, we use evidence from a remixing community to evaluate two pieces of common wisdom about collaboration. First, we test the theory that jointly produced works tend to be of higher quality than individually authored products. Second, we test the theory that collaboration improves the quality of functional works like code, but that it works less well for artistic works like images and sounds. We use data from Scratch, a large online community where hundreds of thousands of young users share and remix millions of animations and interactive games. Using peer-ratings as a measure of quality, we estimate a series of fitted regression models and find that collaborative Scratch projects tend to receive ratings that are lower than individually authored works. We also find that code-intensive collaborations are rated higher than media-intensive efforts. We conclude by discussing the limitations and implications of these findings. | true | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 44,853 |
2502.05593 | Semantic Data Augmentation Enhanced Invariant Risk Minimization for
Medical Image Domain Generalization | Deep learning has achieved remarkable success in medical image classification. However, its clinical application is often hindered by data heterogeneity caused by variations in scanner vendors, imaging protocols, and operators. Approaches such as invariant risk minimization (IRM) aim to address this challenge of out-of-distribution generalization. For instance, VIRM improves upon IRM by tackling the issue of insufficient feature support overlap, demonstrating promising potential. Nonetheless, these methods face limitations in medical imaging due to the scarcity of annotated data and the inefficiency of augmentation strategies. To address these issues, we propose a novel domain-oriented direction selector to replace the random augmentation strategy used in VIRM. Our method leverages inter-domain covariance as a guider for augmentation direction, guiding data augmentation towards the target domain. This approach effectively reduces domain discrepancies and enhances generalization performance. Experiments on a multi-center diabetic retinopathy dataset demonstrate that our method outperforms state-of-the-art approaches, particularly under limited data conditions and significant domain heterogeneity. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 531,681 |
1705.00154 | Classical Planning in Deep Latent Space: Bridging the
Subsymbolic-Symbolic Boundary | Current domain-independent, classical planners require symbolic models of the problem domain and instance as input, resulting in a knowledge acquisition bottleneck. Meanwhile, although deep learning has achieved significant success in many fields, the knowledge is encoded in a subsymbolic representation which is incompatible with symbolic systems such as planners. We propose LatPlan, an unsupervised architecture combining deep learning and classical planning. Given only an unlabeled set of image pairs showing a subset of transitions allowed in the environment (training inputs), and a pair of images representing the initial and the goal states (planning inputs), LatPlan finds a plan to the goal state in a symbolic latent space and returns a visualized plan execution. The contribution of this paper is twofold: (1) State Autoencoder, which finds a propositional state representation of the environment using a Variational Autoencoder. It generates a discrete latent vector from the images, based on which a PDDL model can be constructed and then solved by an off-the-shelf planner. (2) Action Autoencoder / Discriminator, a neural architecture which jointly finds the action symbols and the implicit action models (preconditions/effects), and provides a successor function for the implicit graph search. We evaluate LatPlan using image-based versions of 3 planning domains: 8-puzzle, Towers of Hanoi and LightsOut. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 72,634 |
2206.11250 | Leveraging RGB-D Data with Cross-Modal Context Mining for Glass Surface
Detection | Glass surfaces are becoming increasingly ubiquitous as modern buildings tend to use a lot of glass panels. This, however, poses substantial challenges to the operations of autonomous systems such as robots, self-driving cars, and drones, as these glass panels can become transparent obstacles to navigation. Existing works attempt to exploit various cues, including glass boundary context or reflections, as priors. However, they are all based on input RGB images. We observe that the transmission of 3D depth sensor light through glass surfaces often produces blank regions in the depth maps, which can offer additional insights to complement the RGB image features for glass surface detection. In this work, we first propose a large-scale RGB-D glass surface detection dataset, \textit{RGB-D GSD}, for rigorous experiments and future research. It contains 3,009 images, paired with precise annotations, offering a wide range of real-world RGB-D glass surface categories. We then propose a novel glass surface detection framework combining RGB and depth information, with two novel modules: a cross-modal context mining (CCM) module to adaptively learn individual and mutual context features from RGB and depth information, and a depth-missing aware attention (DAA) module to explicitly exploit spatial locations where missing depths occur to help detect the presence of glass surfaces. Experimental results show that our proposed model outperforms state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 304,211 |
2412.08592 | Adaptive Principal Components Allocation with the
$\ell_{2,g}$-regularized Gaussian Graphical Model for Efficient Fine-Tuning
Large Models | In this work, we propose a novel Parameter-Efficient Fine-Tuning (PEFT) approach based on Gaussian Graphical Models (GGMs), marking the first application of GGMs to PEFT tasks, to the best of our knowledge. The proposed method utilizes the $\ell_{2,g}$-norm to effectively select critical parameters and capture global dependencies. The resulting non-convex optimization problem is efficiently solved using a Block Coordinate Descent (BCD) algorithm. Experimental results on the GLUE benchmark [24] for fine-tuning RoBERTa-Base [18] demonstrate the effectiveness of the proposed approach, achieving competitive performance with significantly fewer trainable parameters. The code for this work is available at: https://github.com/jzheng20/Course projects.git. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 516,159 |
1210.1689 | A New Quantum Data Processing Inequality | Quantum data processing inequality bounds the set of bipartite states that can be generated by two far apart parties under local operations; Having access to a bipartite state as a resource, two parties cannot locally transform it to another bipartite state with a mutual information greater than that of the resource state. But due to the additivity of quantum mutual information under tensor product, the data processing inequality gives no bound when the parties are provided with arbitrary number of copies of the resource state. In this paper we introduce a measure of correlation on bipartite quantum states, called maximal correlation, that is not additive and gives the same number when computed for multiple copies. Then by proving a data processing inequality for this measure, we find a bound on the set of states that can be generated under local operations even when an arbitrary number of copies of the resource state is available. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 18,957 |
1703.00089 | A Joint Identification Approach for Argumentative Writing Revisions | Prior work on revision identification typically uses a pipeline method: revision extraction is first conducted to identify the locations of revisions and revision classification is then conducted on the identified revisions. Such a setting propagates the errors of the revision extraction step to the revision classification step. This paper proposes an approach that identifies the revision location and the revision type jointly to solve the issue of error propagation. It utilizes a sequence representation of revisions and conducts sequence labeling for revision identification. A mutation-based approach is utilized to update identification sequences. Results demonstrate that our proposed approach yields better performance on both revision location extraction and revision type classification compared to a pipeline baseline. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 69,103 |
2307.14384 | HyperFed: Hyperbolic Prototypes Exploration with Consistent Aggregation
for Non-IID Data in Federated Learning | Federated learning (FL) collaboratively models user data in a decentralized way. However, in the real world, non-identical and independent data distributions (non-IID) among clients hinder the performance of FL due to three issues, i.e., (1) the class statistics shifting, (2) the insufficient hierarchical information utilization, and (3) the inconsistency in aggregating clients. To address the above issues, we propose HyperFed which contains three main modules, i.e., hyperbolic prototype Tammes initialization (HPTI), hyperbolic prototype learning (HPL), and consistent aggregation (CA). Firstly, HPTI in the server constructs uniformly distributed and fixed class prototypes, and shares them with clients to match class statistics, further guiding consistent feature representation for local clients. Secondly, HPL in each client captures the hierarchical information in local data with the supervision of shared class prototypes in the hyperbolic model space. Additionally, CA in the server mitigates the impact of the inconsistent deviations from clients to server. Extensive studies of four datasets prove that HyperFed is effective in enhancing the performance of FL under the non-IID set. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 381,916 |
2103.04039 | ClassSR: A General Framework to Accelerate Super-Resolution Networks by
Data Characteristic | We aim at accelerating super-resolution (SR) networks on large images (2K-8K). The large images are usually decomposed into small sub-images in practical usages. Based on this processing, we found that different image regions have different restoration difficulties and can be processed by networks with different capacities. Intuitively, smooth areas are easier to super-solve than complex textures. To utilize this property, we can adopt appropriate SR networks to process different sub-images after the decomposition. On this basis, we propose a new solution pipeline -- ClassSR that combines classification and SR in a unified framework. In particular, it first uses a Class-Module to classify the sub-images into different classes according to restoration difficulties, then applies an SR-Module to perform SR for different classes. The Class-Module is a conventional classification network, while the SR-Module is a network container that consists of the to-be-accelerated SR network and its simplified versions. We further introduce a new classification method with two losses -- Class-Loss and Average-Loss to produce the classification results. After joint training, a majority of sub-images will pass through smaller networks, thus the computational cost can be significantly reduced. Experiments show that our ClassSR can help most existing methods (e.g., FSRCNN, CARN, SRResNet, RCAN) save up to 50% FLOPs on DIV8K datasets. This general framework can also be applied in other low-level vision tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 223,500 |
2411.04204 | Online Budgeted Matching with General Bids | Online Budgeted Matching (OBM) is a classic problem with important applications in online advertising, online service matching, revenue management, and beyond. Traditional online algorithms typically assume a small bid setting, where the maximum bid-to-budget ratio (\kappa) is infinitesimally small. While recent algorithms have tried to address scenarios with non-small or general bids, they often rely on the Fractional Last Matching (FLM) assumption, which allows for accepting partial bids when the remaining budget is insufficient. This assumption, however, does not hold for many applications with indivisible bids. In this paper, we remove the FLM assumption and tackle the open problem of OBM with general bids. We first establish an upper bound of 1-\kappa on the competitive ratio for any deterministic online algorithm. We then propose a novel meta algorithm, called MetaAd, which reduces to different algorithms with first known provable competitive ratios parameterized by the maximum bid-to-budget ratio \kappa \in [0, 1]. As a by-product, we extend MetaAd to the FLM setting and get provable competitive algorithms. Finally, we apply our competitive analysis to the design of learning-augmented algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 506,178 |