id (string) | title (string) | abstract (string) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1605.02693 | Inference of High-dimensional Autoregressive Generalized Linear Models | Vector autoregressive models characterize a variety of time series in which linear combinations of current and past observations can be used to accurately predict future observations. For instance, each element of an observation vector could correspond to a different node in a network, and the parameters of an autoregressive model would correspond to the impact of the network structure on the time series evolution. Often these models are used successfully in practice to learn the structure of social, epidemiological, financial, or biological neural networks. However, little is known about statistical guarantees on estimates of such models in non-Gaussian settings. This paper addresses the inference of the autoregressive parameters and associated network structure within a generalized linear model framework that includes Poisson and Bernoulli autoregressive processes. At the heart of this analysis is a sparsity-regularized maximum likelihood estimator. While sparsity-regularization is well-studied in the statistics and machine learning communities, those analysis methods cannot be applied to autoregressive generalized linear models because of the correlations and potential heteroscedasticity inherent in the observations. Sample complexity bounds are derived using a combination of martingale concentration inequalities and modern empirical process techniques for dependent random variables. These bounds, which are supported by several simulation studies, characterize the impact of various network parameters on estimator performance. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 55,661 |
2404.03309 | Optimistic Online Non-stochastic Control via FTRL | This paper brings the concept of ``optimism" to the new and promising framework of online Non-stochastic Control (NSC). Namely, we study how NSC can benefit from a prediction oracle of unknown quality responsible for forecasting future costs. The posed problem is first reduced to an optimistic learning with delayed feedback problem, which is handled through the Optimistic Follow the Regularized Leader (OFTRL) algorithmic family. This reduction enables the design of \texttt{OptFTRL-C}, the first Disturbance Action Controller (DAC) with optimistic policy regret bounds. These new bounds are commensurate with the oracle's accuracy, ranging from $\mathcal{O}(1)$ for perfect predictions to the order-optimal $\mathcal{O}(\sqrt{T})$ even when all predictions fail. By addressing the challenge of incorporating untrusted predictions into online control, this work contributes to the advancement of the NSC framework and paves the way toward effective and robust learning-based controllers. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 444,204 |
2209.01217 | A Method for Discovering Novel Classes in Tabular Data | In Novel Class Discovery (NCD), the goal is to find new classes in an unlabeled set given a labeled set of known but different classes. While NCD has recently gained attention from the community, no framework has yet been proposed for heterogeneous tabular data, despite being a very common representation of data. In this paper, we propose TabularNCD, a new method for discovering novel classes in tabular data. We show a way to extract knowledge from already known classes to guide the discovery process of novel classes in the context of tabular data which contains heterogeneous variables. A part of this process is done by a new method for defining pseudo labels, and we follow recent findings in Multi-Task Learning to optimize a joint objective function. Our method demonstrates that NCD is not only applicable to images but also to heterogeneous tabular data. Extensive experiments are conducted to evaluate our method and demonstrate its effectiveness against 3 competitors on 7 diverse public classification datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 315,811 |
1902.08440 | Robust Graph Embedding with Noisy Link Weights | We propose $\beta$-graph embedding for robustly learning feature vectors from data vectors and noisy link weights. A newly introduced empirical moment $\beta$-score reduces the influence of contamination and robustly measures the difference between the underlying correct expected weights of links and the specified generative model. The proposed method is computationally tractable; we employ a minibatch-based efficient stochastic algorithm and prove that this algorithm locally minimizes the empirical moment $\beta$-score. We conduct numerical experiments on synthetic and real-world datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 122,193 |
2012.10744 | GlocalNet: Class-aware Long-term Human Motion Synthesis | Synthesis of long-term human motion skeleton sequences is essential to aid human-centric video generation with potential applications in Augmented Reality, 3D character animations, pedestrian trajectory prediction, etc. Long-term human motion synthesis is a challenging task due to multiple factors like long-term temporal dependencies among poses, cyclic repetition across poses, bi-directional and multi-scale dependencies among poses, variable speed of actions, and a large as well as partially overlapping space of temporal pose variations across multiple classes/types of human activities. This paper aims to address these challenges to synthesize a long-term (> 6000 ms) human motion trajectory across a large variety of human activity classes (>50). We propose a two-stage activity generation method to achieve this goal, where the first stage deals with learning the long-term global pose dependencies in activity sequences by learning to synthesize a sparse motion trajectory while the second stage addresses the generation of dense motion trajectories taking the output of the first stage. We demonstrate the superiority of the proposed method over SOTA methods using various quantitative evaluation metrics on publicly available datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 212,430 |
2109.13232 | Contributions to Large Scale Bayesian Inference and Adversarial Machine Learning | The rampant adoption of ML methodologies has revealed that models are usually adopted to make decisions without taking into account the uncertainties in their predictions. More critically, they can be vulnerable to adversarial examples. Thus, we believe that developing ML systems that take into account predictive uncertainties and are robust against adversarial examples is a must for critical, real-world tasks. We start with a case study in retailing. We propose a robust implementation of the Nerlove-Arrow model using a Bayesian structural time series model. Its Bayesian nature facilitates incorporating prior information reflecting the manager's views, which can be updated with relevant data. However, this case adopted classical Bayesian techniques, such as the Gibbs sampler. Nowadays, the ML landscape is pervaded with neural networks and this chapter also surveys current developments in this sub-field. Then, we tackle the problem of scaling Bayesian inference to complex models and large data regimes. In the first part, we propose a unifying view of two different Bayesian inference algorithms, Stochastic Gradient Markov Chain Monte Carlo (SG-MCMC) and Stein Variational Gradient Descent (SVGD), leading to improved and efficient novel sampling schemes. In the second part, we develop a framework to boost the efficiency of Bayesian inference in probabilistic models by embedding a Markov chain sampler within a variational posterior approximation. After that, we present an alternative perspective on adversarial classification based on adversarial risk analysis, and leveraging the scalable Bayesian approaches from chapter 2. In chapter 4 we turn to reinforcement learning, introducing Threatened Markov Decision Processes, showing the benefits of accounting for adversaries in RL while the agent learns. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 257,575 |
2412.15967 | Self-Supervised Radiograph Anatomical Region Classification -- How Clean Is Your Real-World Data? | Modern deep learning-based clinical imaging workflows rely on accurate labels of the examined anatomical region. Knowing the anatomical region is required to select applicable downstream models and to effectively generate cohorts of high quality data for future medical and machine learning research efforts. However, this information may not be available in externally sourced data or generally contain data entry errors. To address this problem, we show the effectiveness of self-supervised methods such as SimCLR and BYOL as well as supervised contrastive deep learning methods in assigning one of 14 anatomical region classes in our in-house dataset of 48,434 skeletal radiographs. We achieve a strong linear evaluation accuracy of 96.6% with a single model and 97.7% using an ensemble approach. Furthermore, only a few labeled instances (1% of the training set) suffice to achieve an accuracy of 92.2%, enabling usage in low-label and thus low-resource scenarios. Our model can be used to correct data entry mistakes: a follow-up analysis of the test set errors of our best-performing single model by an expert radiologist identified 35% incorrect labels and 11% out-of-domain images. When accounted for, the radiograph anatomical region labelling performance increased -- without and with an ensemble, respectively -- to a theoretical accuracy of 98.0% and 98.8%. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 519,324 |
2405.01745 | Large Language Models for UAVs: Current State and Pathways to the Future | Unmanned Aerial Vehicles (UAVs) have emerged as a transformative technology across diverse sectors, offering adaptable solutions to complex challenges in both military and civilian domains. Their expanding capabilities present a platform for further advancement by integrating cutting-edge computational tools like Artificial Intelligence (AI) and Machine Learning (ML) algorithms. These advancements have significantly impacted various facets of human life, fostering an era of unparalleled efficiency and convenience. Large Language Models (LLMs), a key component of AI, exhibit remarkable learning and adaptation capabilities within deployed environments, demonstrating an evolving form of intelligence with the potential to approach human-level proficiency. This work explores the significant potential of integrating UAVs and LLMs to propel the development of autonomous systems. We comprehensively review LLM architectures, evaluating their suitability for UAV integration. Additionally, we summarize the state-of-the-art LLM-based UAV architectures and identify novel opportunities for LLM embedding within UAV frameworks. Notably, we focus on leveraging LLMs to refine data analysis and decision-making processes, specifically for enhanced spectral sensing and sharing in UAV applications. Furthermore, we investigate how LLM integration expands the scope of existing UAV applications, enabling autonomous data processing, improved decision-making, and faster response times in emergency scenarios like disaster response and network restoration. Finally, we highlight crucial areas for future research that are critical for facilitating the effective integration of LLMs and UAVs. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 451,488 |
1811.03214 | Facial Landmark Detection for Manga Images | The topic of facial landmark detection has been widely covered for pictures of human faces, but it is still a challenge for drawings. Indeed, the proportions and symmetry of standard human faces are not always used for comics or mangas. The personal style of the author, the limitation of colors, etc. makes the landmark detection on faces in drawings a difficult task. Detecting the landmarks on manga images will be useful to provide new services for easily editing the character faces, estimating the character emotions, or generating automatically some animations such as lip or eye movements. This paper contains two main contributions: 1) a new landmark annotation model for manga faces, and 2) a deep learning approach to detect these landmarks. We use the "Deep Alignment Network", a multi-stage architecture where the first stage makes an initial estimation which gets refined in further stages. The first results show that the proposed method succeeds in accurately finding the landmarks in more than 80% of the cases. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 112,781 |
2307.02036 | Convex Optimal Power Flow Based on Power Injection-based Equations and Its Application in Bipolar DC Distribution Network | Optimal power flow (OPF) is a fundamental tool for analyzing the characteristics of bipolar DC distribution network (DCDN). However, existing OPF models face challenges in reflecting the power distribution and exchange of bipolar DCDN directly since its decision variables are voltage and current. This paper addresses this issue by establishing a convex OPF model that can be used for the planning and operation of bipolar DCDN. First, the power flow characteristics of bipolar DCDN are revealed through power injection-based equations, upon which the original OPF model is established. Next, the original OPF model undergoes a transformation into a convex OPF model based on second-order cone programming (SOCP) through variable substitution, second-order cone relaxation, McCormick relaxation, and first-order Taylor expansion, respectively. Finally, the sequence bound tightening algorithm (STBA) is employed to tighten the boundaries of McCormick envelopes in each iteration to ensure the exactness of the convex OPF model. The effectiveness of this novel OPF model for bipolar DCDN is verified through two case studies, i.e., capacity configuration of distributed generation (DG) and operation optimization of bipolar DCDN. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 377,561 |
1611.03934 | Personalized Donor-Recipient Matching for Organ Transplantation | Organ transplants can improve the life expectancy and quality of life for the recipient but carry the risk of serious post-operative complications, such as septic shock and organ rejection. The probability of a successful transplant depends in a very subtle fashion on compatibility between the donor and the recipient, but current medical practice is short of domain knowledge regarding the complex nature of recipient-donor compatibility. Hence a data-driven approach for learning compatibility has the potential for significant improvements in match quality. This paper proposes a novel system (ConfidentMatch) that is trained using data from electronic health records. ConfidentMatch predicts the success of an organ transplant (in terms of the 3 year survival rates) on the basis of clinical and demographic traits of the donor and recipient. ConfidentMatch captures the heterogeneity of the donor and recipient traits by optimally dividing the feature space into clusters and constructing different optimal predictive models to each cluster. The system controls the complexity of the learned predictive model in a way that allows for assuring more granular and confident predictions for a larger number of potential recipient-donor pairs, thereby ensuring that predictions are "personalized" and tailored to individual characteristics to the finest possible granularity. Experiments conducted on the UNOS heart transplant dataset show the superiority of the prognostic value of ConfidentMatch to other competing benchmarks; ConfidentMatch can provide predictions of success with 95% confidence for 5,489 patients of a total population of 9,620 patients, which corresponds to 410 more patients than the most competitive benchmark algorithm (DeepBoost). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 63,759 |
2202.12361 | RescueNet: A High Resolution UAV Semantic Segmentation Benchmark Dataset for Natural Disaster Damage Assessment | Recent advancements in computer vision and deep learning techniques have facilitated notable progress in scene understanding, thereby assisting rescue teams in achieving precise damage assessment. In this paper, we present RescueNet, a meticulously curated high-resolution post-disaster dataset that includes detailed classification and semantic segmentation annotations. This dataset aims to facilitate comprehensive scene understanding in the aftermath of natural disasters. RescueNet comprises post-disaster images collected after Hurricane Michael, obtained using Unmanned Aerial Vehicles (UAVs) from multiple impacted regions. The uniqueness of RescueNet lies in its provision of high-resolution post-disaster imagery, accompanied by comprehensive annotations for each image. Unlike existing datasets that offer annotations limited to specific scene elements such as buildings, RescueNet provides pixel-level annotations for all classes, including buildings, roads, pools, trees, and more. Furthermore, we evaluate the utility of the dataset by implementing state-of-the-art segmentation models on RescueNet, demonstrating its value in enhancing existing methodologies for natural disaster damage assessment. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 282,199 |
2304.01114 | Associating Spatially-Consistent Grouping with Text-supervised Semantic Segmentation | In this work, we investigate performing semantic segmentation solely through the training on image-sentence pairs. Due to the lack of dense annotations, existing text-supervised methods can only learn to group an image into semantic regions via pixel-insensitive feedback. As a result, their grouped results are coarse and often contain small spurious regions, limiting the upper-bound performance of segmentation. On the other hand, we observe that grouped results from self-supervised models are more semantically consistent and break the bottleneck of existing methods. Motivated by this, we propose to associate self-supervised spatially-consistent grouping with text-supervised semantic segmentation. Considering the part-like grouped results, we further adapt a text-supervised model from image-level to region-level recognition with two core designs. First, we encourage fine-grained alignment with a one-way noun-to-region contrastive loss, which reduces the mismatched noun-region pairs. Second, we adopt a contextually aware masking strategy to enable simultaneous recognition of all grouped regions. Coupled with spatially-consistent grouping and region-adapted recognition, our method achieves 59.2% mIoU and 32.4% mIoU on Pascal VOC and Pascal Context benchmarks, significantly surpassing the state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 355,943 |
1211.5353 | Faster Compact Top-k Document Retrieval | An optimal index solving top-k document retrieval [Navarro and Nekrich, SODA12] takes O(m + k) time for a pattern of length m, but its space is at least 80n bytes for a collection of n symbols. We reduce it to 1.5n to 3n bytes, with O(m+(k+log log n) log log n) time, on typical texts. The index is up to 25 times faster than the best previous compressed solutions, and requires at most 5% more space in practice (and in some cases as little as one half). Apart from replacing classical by compressed data structures, our main idea is to replace suffix tree sampling by frequency thresholding to achieve compression. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | 19,880 |
1905.09434 | Automated Process Planning for Turning: A Feature-Free Approach | Turning is the most commonly available and least expensive machining operation, in terms of both machine-hour rates and tool insert prices. A practical CNC process planner has to maximize the utilization of turning, not only to attain precision requirements for turnable surfaces, but also to minimize the machining cost, while non-turnable features can be left for other processes such as milling. Most existing methods rely on separation of surface features and lack guarantees when analyzing complex parts with interacting features. In a previous study, we demonstrated successful implementation of a feature-free milling process planner based on configuration space methods used for spatial reasoning and AI search for planning. This paper extends the feature-free method to include turning process planning. It opens up the opportunity for seamless integration of turning actions into a mill-turn process planner that can handle arbitrarily complex shapes with or without a priori knowledge of feature semantics. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 131,730 |
2411.17982 | HI-SLAM2: Geometry-Aware Gaussian SLAM for Fast Monocular Scene Reconstruction | We present HI-SLAM2, a geometry-aware Gaussian SLAM system that achieves fast and accurate monocular scene reconstruction using only RGB input. Existing Neural SLAM or 3DGS-based SLAM methods often trade off between rendering quality and geometry accuracy; our research demonstrates that both can be achieved simultaneously with RGB input alone. The key idea of our approach is to enhance the ability for geometry estimation by combining easy-to-obtain monocular priors with learning-based dense SLAM, and then using 3D Gaussian splatting as our core map representation to efficiently model the scene. Upon loop closure, our method ensures on-the-fly global consistency through efficient pose graph bundle adjustment and instant map updates by explicitly deforming the 3D Gaussian units based on anchored keyframe updates. Furthermore, we introduce a grid-based scale alignment strategy to maintain improved scale consistency in prior depths for finer depth details. Through extensive experiments on Replica, ScanNet, and ScanNet++, we demonstrate significant improvements over existing Neural SLAM methods and even surpass RGB-D-based methods in both reconstruction and rendering quality. The project page and source code will be made available at https://hi-slam2.github.io/. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 511,676 |
2103.12940 | Exercise with Social Robots: Companion or Coach? | In this paper, we investigate the roles that social robots can take in physical exercise with human partners. In related work, robots or virtual intelligent agents take the role of a coach or instructor whereas in other approaches they are used as motivational aids. These are two "paradigms", so to speak, within the small but growing area of robots for social exercise. We designed an online questionnaire to test whether the preferred role in which people want to see robots would be the companion or the coach. The questionnaire asks people to imagine working out with a robot with the help of three utilized questionnaires: (1) CART-Q which is used for judging coach-athlete relationships, (2) the mind perception questionnaire and (3) the System Usability Scale (SUS). We present the methodology, some preliminary results as well as our intended future work on personal robots for coaching. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 226,327 |
2502.02091 | Efficient Dynamic Scene Editing via 4D Gaussian-based Static-Dynamic Separation | Recent 4D dynamic scene editing methods require editing thousands of 2D images used for dynamic scene synthesis and updating the entire scene with additional training loops, resulting in several hours of processing to edit a single dynamic scene. Therefore, these methods are not scalable with respect to the temporal dimension of the dynamic scene (i.e., the number of timesteps). In this work, we propose an efficient dynamic scene editing method that is more scalable in terms of temporal dimension. To achieve computational efficiency, we leverage a 4D Gaussian representation that models a 4D dynamic scene by combining static 3D Gaussians with a Hexplane-based deformation field, which handles dynamic information. We then perform editing solely on the static 3D Gaussians, which is the minimal but sufficient component required for visual editing. To resolve the misalignment between the edited 3D Gaussians and the deformation field potentially resulting from the editing process, we additionally conduct a refinement stage using a score distillation mechanism. Extensive editing results demonstrate that our method is efficient, reducing editing time by more than half compared to existing methods, while achieving high editing quality that better follows user instructions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 530,178 |
2403.11836 | Stochastic Mean Field Game for Strategic Bidding of Consumers in Congested Distribution Networks | The rapid increase of photovoltaic cells, batteries, and Electric Vehicles (EVs) in electric grids can result in congested distribution networks. An alternative to enhancing network capacity is a redispatch market, allowing Distribution System Operators (DSOs) to alleviate congested networks by asking energy consumers to change their consumption schedules. However, energy consumers can anticipate the redispatch market outcomes and strategically adjust their bids in the day-ahead market. This behaviour, known as increase-decrease gaming, can result in the exacerbation of congestion and enable energy consumers to gain windfall profits from the DSO. In this paper, we consider a two-stage problem consisting of the day-ahead market (first stage) and redispatch market (second stage). Then, we model the increase-decrease game for large populations of energy consumers in power networks using a stochastic mean field game approach. The agents (energy consumers) maximize their individual welfare in the day-ahead market with anticipation of the redispatch market. We show that all the agent strategies are ordered along their utilities and there exists a unique Nash equilibrium for this game. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 438,887 |
2307.05036 | Neural-Symbolic Recommendation with Graph-Enhanced Information | The recommendation system is not only a problem of inductive statistics from data but also a cognitive task that requires reasoning ability. The most advanced graph neural networks have been widely used in recommendation systems because they can capture implicit structured information from graph-structured data. However, like most neural network algorithms, they only learn matching patterns from a perception perspective. Some researchers use user behavior for logic reasoning to achieve recommendation prediction from the perspective of cognitive reasoning, but this kind of reasoning is a local one and ignores implicit information on a global scale. In this work, we combine the advantages of graph neural networks and propositional logic operations to construct a neuro-symbolic recommendation model with both global implicit reasoning ability and local explicit logic reasoning ability. We first build an item-item graph based on the principle of adjacent interaction and use graph neural networks to capture implicit information in global data. Then we transform user behavior into propositional logic expressions to achieve recommendations from the perspective of cognitive reasoning. Extensive experiments on five public datasets show that our proposed model outperforms several state-of-the-art methods; source code is available at [https://github.com/hanzo2020/GNNLR]. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 378,592 |
2112.02471 | Grappling with the Scale of Born-Digital Government Publications: Toward Pipelines for Processing and Searching Millions of PDFs | Official government publications are key sources for understanding the history of societies. Web publishing has fundamentally changed the scale and processes by which governments produce and disseminate information. Significantly, a range of web archiving programs have captured massive troves of government publications. For example, hundreds of millions of unique U.S. Government documents posted to the web in PDF form have been archived by libraries to date. Yet, these PDFs remain largely unutilized and understudied in part due to the challenges surrounding the development of scalable pipelines for searching and analyzing them. This paper utilizes a Library of Congress dataset of 1,000 government PDFs in order to offer initial approaches for searching and analyzing these PDFs at scale. In addition to demonstrating the utility of PDF metadata, this paper offers computationally-efficient machine learning approaches to search and discovery that utilize the PDFs' textual and visual features as well. We conclude by detailing how these methods can be operationalized at scale in order to support systems for navigating millions of PDFs. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | 269,853 |
2407.02719 | Boosting Biomedical Concept Extraction by Rule-Based Data Augmentation | Document-level biomedical concept extraction is the task of identifying biomedical concepts mentioned in a given document. Recent advancements have adapted pre-trained language models for this task. However, the scarcity of domain-specific data and the deviation of concepts from their canonical names often hinder these models' effectiveness. To tackle this issue, we employ MetaMapLite, an existing rule-based concept mapping system, to generate additional pseudo-annotated data from PubMed and PMC. The annotated data are used to augment the limited training data. Through extensive experiments, this study demonstrates the utility of a manually crafted concept mapping tool for training a better concept extraction model. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 469,850 |
1908.00920 | Identification of gatekeeper diseases on the way to cardiovascular
mortality | Multimorbidity, the co-occurrence of two or more chronic diseases such as diabetes, obesity or cardiovascular diseases in one patient, is a frequent phenomenon. To make care more efficient, it is of relevance to understand how different diseases condition each other over the life time of a patient. However, most of our current knowledge on such patient careers is either confined to narrow time spans or specific (sets of) diseases. Here, we present a population-wide analysis of long-term patient trajectories by clustering them according to their disease history observed over 17 years. When patients acquire new diseases, their cluster assignment might change. A health trajectory can then be described by a temporal sequence of disease clusters. From the transitions between clusters we construct an age-dependent multilayer network of disease clusters. Random walks on this multilayer network provide a more precise model for the time evolution of multimorbid health states when compared to models that cluster patients based on single diseases. Our results can be used to identify decisive events that potentially determine the future disease trajectory of a patient. We find that for elderly patients the cluster network consists of regions of low, medium and high in-hospital mortality. Diagnoses of diabetes and hypertension are found to strongly increase the likelihood for patients to subsequently move into the high-mortality region later in life. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 140,630 |
1609.02746 | INSIGHT-1 at SemEval-2016 Task 4: Convolutional Neural Networks for
Sentiment Classification and Quantification | This paper describes our deep learning-based approach to sentiment analysis in Twitter as part of SemEval-2016 Task 4. We use a convolutional neural network to determine sentiment and participate in all subtasks, i.e. two-point, three-point, and five-point scale sentiment classification and two-point and five-point scale sentiment quantification. We achieve competitive results for two-point scale sentiment classification and quantification, ranking fifth and a close fourth (third and second by alternative metrics) respectively despite using only pre-trained embeddings that contain no sentiment information. We achieve good performance on three-point scale sentiment classification, ranking eighth out of 35, while performing poorly on five-point scale sentiment classification and quantification. An error analysis reveals that this is due to low expressiveness of the model to capture negative sentiment as well as an inability to take into account ordinal information. We propose improvements in order to address these and other issues. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 60,778 |
2003.13985 | DeepLPF: Deep Local Parametric Filters for Image Enhancement | Digital artists often improve the aesthetic quality of digital photographs through manual retouching. Beyond global adjustments, professional image editing programs provide local adjustment tools operating on specific parts of an image. Options include parametric (graduated, radial filters) and unconstrained brush tools. These highly expressive tools enable a diverse set of local image enhancements. However, their use can be time consuming, and requires artistic capability. State-of-the-art automated image enhancement approaches typically focus on learning pixel-level or global enhancements. The former can be noisy and lack interpretability, while the latter can fail to capture fine-grained adjustments. In this paper, we introduce a novel approach to automatically enhance images using learned spatially local filters of three different types (Elliptical Filter, Graduated Filter, Polynomial Filter). We introduce a deep neural network, dubbed Deep Local Parametric Filters (DeepLPF), which regresses the parameters of these spatially localized filters that are then automatically applied to enhance the image. DeepLPF provides a natural form of model regularization and enables interpretable, intuitive adjustments that lead to visually pleasing results. We report on multiple benchmarks and show that DeepLPF produces state-of-the-art performance on two variants of the MIT-Adobe-5K dataset, often using a fraction of the parameters required for competing methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 170,380 |
2409.04194 | Towards Privacy-Preserving Relational Data Synthesis via Probabilistic
Relational Models | Probabilistic relational models provide a well-established formalism to combine first-order logic and probabilistic models, thereby allowing the representation of relationships between objects in a relational domain. At the same time, the field of artificial intelligence requires increasingly large amounts of relational training data for various machine learning tasks. Collecting real-world data, however, is often challenging due to privacy concerns, data protection regulations, high costs, and so on. To mitigate these challenges, the generation of synthetic data is a promising approach. In this paper, we solve the problem of generating synthetic relational data via probabilistic relational models. In particular, we propose a fully-fledged pipeline to go from relational database to probabilistic relational model, which can then be used to sample new synthetic relational data points from its underlying probability distribution. As part of our proposed pipeline, we introduce a learning algorithm to construct a probabilistic relational model from a given relational database. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | true | false | 486,324
1410.0265 | A Data- and Workload-Aware Algorithm for Range Queries Under
Differential Privacy | We describe a new algorithm for answering a given set of range queries under $\epsilon$-differential privacy which often achieves substantially lower error than competing methods. Our algorithm satisfies differential privacy by adding noise that is adapted to the input data and to the given query set. We first privately learn a partitioning of the domain into buckets that suit the input data well. Then we privately estimate counts for each bucket, doing so in a manner well-suited for the given query set. Since the performance of the algorithm depends on the input database, we evaluate it on a wide range of real datasets, showing that we can achieve the benefits of data-dependence on both "easy" and "hard" databases. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 36,449 |
2501.06988 | Fully Differentiable Boundary Element Solver for Hydrodynamic
Sensitivity Analysis of Wave-Structure Interactions | Accurately predicting wave-structure interactions is critical for the effective design and analysis of marine structures. This is typically achieved using solvers that employ the boundary element method (BEM), which relies on linear potential flow theory. Precise estimation of the sensitivity of these interactions is equally important for system-level applications such as design optimization. Current BEM solvers are unable to provide these sensitivities as they are not differentiable. To address these challenges, we have developed a fully-differentiable BEM solver for marine hydrodynamics, capable of calculating diffraction and radiation coefficients, and their derivatives with high accuracy. This new solver implements both direct and indirect BEM formulations and incorporates two Green's function expressions, offering a trade-off between accuracy and computational speed. Gradients are computed using reverse-mode automatic differentiation (AD) within the Julia programming language. As a first case study, we analyze two identical floating spheres, evaluating gradients with respect to physical dimensions, inter-sphere distance, and wave frequency. Validation studies demonstrate excellent agreement between AD-computed gradients and finite-difference results. In a second case study, we leverage AD-computed gradients to optimize the mechanical power production of a pair of wave energy converters (WECs). This represents the first application of gradients in WEC power optimization, offering valuable insights into hydrodynamic interactions and advancing the understanding of layout optimization for maximum efficiency. Beyond power optimization, the differentiable BEM solver highlights the potential of AD for offshore design studies. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 524,217 |
2205.01698 | On Circuit Depth Scaling For Quantum Approximate Optimization | Variational quantum algorithms are the centerpiece of modern quantum programming. These algorithms involve training parameterized quantum circuits using a classical co-processor, an approach adapted partly from classical machine learning. An important subclass of these algorithms, designed for combinatorial optimization on current quantum hardware, is the quantum approximate optimization algorithm (QAOA). It is known that problem density (the ratio of problem constraints to variables) induces under-parametrization in fixed depth QAOA. Density dependent performance has been reported in the literature, yet the circuit depth required to achieve fixed performance (henceforth called critical depth) remained unknown. Here, we propose a predictive model, based on a logistic saturation conjecture for critical depth scaling with respect to density. Focusing on random instances of MAX-2-SAT, we test our predictive model against simulated data with up to 15 qubits. We report the average critical depth, required to attain a success probability of 0.7, saturates at a value of 10 for densities beyond 4. We observe the predictive model to describe the simulated data within a $3\sigma$ confidence interval. Furthermore, based on the model, a linear trend for the critical depth with respect to problem size is recovered for the range of 5 to 15 qubits. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 294,684
2202.10560 | Moment Matching Deep Contrastive Latent Variable Models | In the contrastive analysis (CA) setting, machine learning practitioners are specifically interested in discovering patterns that are enriched in a target dataset as compared to a background dataset generated from sources of variation irrelevant to the task at hand. For example, a biomedical data analyst may seek to understand variations in genomic data only present among patients with a given disease as opposed to those also present in healthy control subjects. Such scenarios have motivated the development of contrastive latent variable models to isolate variations unique to these target datasets from those shared across the target and background datasets, with current state of the art models based on the variational autoencoder (VAE) framework. However, previously proposed models do not explicitly enforce the constraints on latent variables underlying CA, potentially leading to the undesirable leakage of information between the two sets of latent variables. Here we propose the moment matching contrastive VAE (MM-cVAE), a reformulation of the VAE for CA that uses the maximum mean discrepancy to explicitly enforce two crucial latent variable constraints underlying CA. On three challenging CA tasks we find that our method outperforms the previous state-of-the-art both qualitatively and on a set of quantitative metrics. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 281,564 |
2306.00519 | DiffInDScene: Diffusion-based High-Quality 3D Indoor Scene Generation | We present DiffInDScene, a novel framework for tackling the problem of high-quality 3D indoor scene generation, which is challenging due to the complexity and diversity of the indoor scene geometry. Although diffusion-based generative models have previously demonstrated impressive performance in image generation and object-level 3D generation, they have not yet been applied to room-level 3D generation due to their computationally intensive costs. In DiffInDScene, we propose a cascaded 3D diffusion pipeline that is efficient and possesses strong generative performance for Truncated Signed Distance Function (TSDF). The whole pipeline is designed to run on a sparse occupancy space in a coarse-to-fine fashion. Inspired by KinectFusion's incremental alignment and fusion of local TSDF volumes, we propose a diffusion-based SDF fusion approach that iteratively diffuses and fuses local TSDF volumes, facilitating the generation of an entire room environment. The generated results demonstrate that our work is capable of achieving high-quality room generation directly in three-dimensional space, starting from scratch. In addition to the scene generation, the final part of DiffInDScene can be used as a post-processing module to refine the 3D reconstruction results from multi-view stereo. According to the user study, the mesh quality generated by our DiffInDScene can even outperform the ground truth mesh provided by ScanNet. Please visit our project page for the latest progress and demonstrations: https://github.com/AkiraHero/diffindscene. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 370,042
1603.04565 | A Generalized Labeled Multi-Bernoulli Filter for Maneuvering Targets | A multiple maneuvering target system can be viewed as a Jump Markov System (JMS) in the sense that the target movement can be modeled using different motion models where the transition between the motion models by a particular target follows a Markov chain probability rule. This paper describes a Generalized Labelled Multi-Bernoulli (GLMB) filter for tracking maneuvering targets whose movement can be modeled via such a JMS. The proposed filter is validated with two linear and nonlinear maneuvering target tracking examples. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 53,260 |
0710.0244 | Theoretical Engineering and Satellite Comlink of a PTVD-SHAM System | This paper focuses on super helical memory system's design, 'Engineering, Architectural and Satellite Communications' as a theoretical approach of an invention-model to 'store time-data'. The current release entails three concepts: 1- an in-depth theoretical physics engineering of the chip including its, 2- architectural concept based on VLSI methods, and 3- the time-data versus data-time algorithm. The 'Parallel Time Varying & Data Super-helical Access Memory' (PTVD-SHAM), possesses a waterfall effect in its architecture dealing with the process of voltage output-switch into diverse logic and quantum states described as 'Boolean logic & image-logic', respectively. Quantum dot computational methods are explained by utilizing coiled carbon nanotubes (CCNTs) and CNT field effect transistors (CNFETs) in the chip's architecture. Quantum confinement, categorized quantum well substrate, and B-field flux involvements are discussed in theory. Multi-access of coherent sequences of 'qubit addressing' in any magnitude, gained as pre-defined, here e.g., the 'big O notation' asymptotically confined into singularity while possessing a magnitude of 'infinity' for the orientation of array displacement. Gaussian curvature of k<0 versus k'>(k<0) is debated in aim of specifying the 2D electron gas characteristics, data storage system for defining short and long time cycles for different CCNT diameters where space-time continuum is folded by chance for the particle. Precise pre/post data timing for, e.g., seismic waves before earthquake mantle-reach event occurrence, including time varying self-clocking devices in diverse geographic locations for radar systems is illustrated in the Subsections of the paper. The theoretical fabrication process, electromigration between chip's components is discussed as well. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 719
2009.09255 | City-Scale Visual Place Recognition with Deep Local Features Based on
Multi-Scale Ordered VLAD Pooling | Visual place recognition is the task of recognizing a place depicted in an image based on its pure visual appearance without metadata. In visual place recognition, the challenges lie not only in changes in lighting conditions, camera viewpoint, and scale but also in the characteristics of scene-level images and the distinct features of the area. To resolve these challenges, one must consider both the local discriminativeness and the global semantic context of images. On the other hand, the diversity of the datasets is also particularly important to develop more general models and advance the progress of the field. In this paper, we present a fully-automated system for place recognition at a city-scale based on content-based image retrieval. Our main contributions to the community lie in three aspects. Firstly, we take a comprehensive analysis of visual place recognition and sketch out the unique challenges of the task compared to general image retrieval tasks. Next, we propose a simple pooling approach on top of convolutional neural network activations to embed the spatial information into the image representation vector. Finally, we introduce new datasets for place recognition, which are particularly essential for application-based research. Furthermore, through extensive experiments, various issues in both image retrieval and place recognition are analyzed and discussed to give some insights into improving the performance of retrieval models in reality. The dataset used in this paper can be found at https://github.com/canhld94/Daejeon520 | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 196,514
2012.01300 | Learning from others' mistakes: Avoiding dataset biases without modeling
them | State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended underlying task. Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available. We consider cases where the bias issues may not be explicitly identified, and show a method for training models that learn to ignore these problematic correlations. Our approach relies on the observation that models with limited capacity primarily learn to exploit biases in the dataset. We can leverage the errors of such limited capacity models to train a more robust model in a product of experts, thus bypassing the need to hand-craft a biased model. We show the effectiveness of this method to retain improvements in out-of-distribution settings even if no particular bias is targeted by the biased model. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 209,383 |
2109.09667 | On Generalization in Coreference Resolution | While coreference resolution is defined independently of dataset domain, most models for performing coreference resolution do not transfer well to unseen domains. We consolidate a set of 8 coreference resolution datasets targeting different domains to evaluate the off-the-shelf performance of models. We then mix three datasets for training; even though their domain, annotation guidelines, and metadata differ, we propose a method for jointly training a single model on this heterogeneous data mixture by using data augmentation to account for annotation differences and sampling to balance the data quantities. We find that in a zero-shot setting, models trained on a single dataset transfer poorly while joint training yields improved overall performance, leading to better generalization in coreference resolution models. This work contributes a new benchmark for robust coreference resolution and multiple new state-of-the-art results. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 256,353 |
2407.08306 | Let Network Decide What to Learn: Symbolic Music Understanding Model
Based on Large-scale Adversarial Pre-training | As a crucial aspect of Music Information Retrieval (MIR), Symbolic Music Understanding (SMU) has garnered significant attention for its potential to assist both musicians and enthusiasts in learning and creating music. Recently, pre-trained language models have been widely adopted in SMU due to the substantial similarities between symbolic music and natural language, as well as the ability of these models to leverage limited music data effectively. However, some studies have shown that common pre-trained methods like Mask Language Model (MLM) may introduce bias issues such as racial discrimination in Natural Language Processing (NLP) and affect the performance of downstream tasks, which also happens in SMU. This bias often arises when masked tokens cannot be inferred from their context, forcing the model to overfit the training set instead of generalizing. To address this challenge, we propose Adversarial-MidiBERT for SMU, which adaptively determines what to mask during MLM via a masker network, rather than employing random masking. By avoiding the masking of tokens that are difficult to infer from context, our model is better equipped to capture contextual structures and relationships, rather than merely conforming to the training data distribution. We evaluate our method across four SMU tasks, and our approach demonstrates excellent performance in all cases. The code for our model is publicly available at https://github.com/RS2002/Adversarial-MidiBERT. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 472,118
2410.15176 | Beyond Pruning Criteria: The Dominant Role of Fine-Tuning and Adaptive
Ratios in Neural Network Robustness | Deep neural networks (DNNs) excel in tasks like image recognition and natural language processing, but their increasing complexity complicates deployment in resource-constrained environments and increases susceptibility to adversarial attacks. While traditional pruning methods reduce model size, they often compromise the network's ability to withstand subtle perturbations. This paper challenges the conventional emphasis on weight importance scoring as the primary determinant of a pruned network's performance. Through extensive analysis, including experiments conducted on CIFAR, Tiny-ImageNet, and various network architectures, we demonstrate that effective fine-tuning plays a dominant role in enhancing both performance and adversarial robustness, often surpassing the impact of the chosen pruning criteria. To address this issue, we introduce Module Robust Sensitivity, a novel metric that adaptively adjusts the pruning ratio for each network layer based on its sensitivity to adversarial perturbations. By integrating this metric into the pruning process, we develop a stable algorithm that maintains accuracy and robustness simultaneously. Experimental results show that our approach enables the practical deployment of more robust and efficient neural networks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 500,405 |
2407.04947 | FreeCompose: Generic Zero-Shot Image Composition with Diffusion Prior | We offer a novel approach to image composition, which integrates multiple input images into a single, coherent image. Rather than concentrating on specific use cases such as appearance editing (image harmonization) or semantic editing (semantic image composition), we showcase the potential of utilizing the powerful generative prior inherent in large-scale pre-trained diffusion models to accomplish generic image composition applicable to both scenarios. We observe that the pre-trained diffusion models automatically identify simple copy-paste boundary areas as low-density regions during denoising. Building on this insight, we propose to optimize the composed image towards high-density regions guided by the diffusion prior. In addition, we introduce a novel maskguided loss to further enable flexible semantic image composition. Extensive experiments validate the superiority of our approach in achieving generic zero-shot image composition. Additionally, our approach shows promising potential in various tasks, such as object removal and multiconcept customization. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 470,757 |
2003.05739 | Technical report: Training Mixture Density Networks with full covariance
matrices | Mixture Density Networks are a tried and tested tool for modelling conditional probability distributions. As such, they constitute a great baseline for novel approaches to this problem. In the standard formulation, an MDN takes some input and outputs parameters for a Gaussian mixture model with restrictions on the mixture components' covariance. Since covariance between random variables is a central issue in the conditional modeling problems we were investigating, I derived and implemented an MDN formulation with unrestricted covariances. It is likely that this has been done before, but I could not find any resources online. For this reason, I have documented my approach in the form of this technical report, in hopes that it may be useful to others facing a similar situation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 167,938 |
2303.13439 | Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video
Generators | Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain. Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object. Experiments show that this leads to low overhead, yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing. As experiments show, our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data. Our code will be open sourced at: https://github.com/Picsart-AI-Research/Text2Video-Zero . | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 353,663 |
2009.05481 | A deep-learning model for evaluating and predicting the impact of
lockdown policies on COVID-19 cases | To reduce the impact of COVID-19 pandemic most countries have implemented several counter-measures to control the virus spread including school and border closing, shutting down public transport and workplace and restrictions on gathering. In this research work, we propose a deep-learning prediction model for evaluating and predicting the impact of various lockdown policies on daily COVID-19 cases. This is achieved by first clustering countries having similar lockdown policies, then training a prediction model based on the daily cases of the countries in each cluster along with the data describing their lockdown policies. Once the model is trained, it can be used to evaluate several scenarios associated to lockdown policies and investigate their impact on the predicted COVID cases. Our evaluation experiments, conducted on Qatar as a use case, show that the proposed approach achieved competitive prediction accuracy. Additionally, our findings highlighted that lifting restrictions particularly on schools and border opening would result in significant increase in the number of cases during the study period. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 195,334
0812.5032 | A New Clustering Algorithm Based Upon Flocking On Complex Network | We have proposed a model based upon flocking on a complex network, and then developed two clustering algorithms on the basis of it. In the algorithms, firstly a \textit{k}-nearest neighbor (knn) graph as a weighted and directed graph is produced among all data points in a dataset each of which is regarded as an agent who can move in space, and then a time-varying complex network is created by adding long-range links for each data point. Furthermore, each data point is acted on not only by its \textit{k} nearest neighbors but also by \textit{r} long-range neighbors through fields established in space by them together, so it will take a step along the direction of the vector sum of all fields. More importantly, these long-range links provide some hidden information for each data point as it moves and at the same time accelerate its convergence toward a center. As they move in space according to the proposed model, data points that belong to the same class are located at a same position gradually, whereas those that belong to different classes are away from one another. Consequently, the experimental results have demonstrated that data points in datasets are clustered reasonably and efficiently, and the rates of convergence of clustering algorithms are fast enough. Moreover, the comparison with other algorithms also provides an indication of the effectiveness of the proposed approach. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 2,862
2009.12939 | Strong replica symmetry for high-dimensional disordered log-concave
Gibbs measures | We consider a generic class of log-concave, possibly random, (Gibbs) measures. We prove the concentration of an infinite family of order parameters called multioverlaps. Because they completely parametrise the quenched Gibbs measure of the system, this implies a simple representation of the asymptotic Gibbs measures, as well as the decoupling of the variables in a strong sense. These results may prove themselves useful in several contexts. In particular in machine learning and high-dimensional inference, log-concave measures appear in convex empirical risk minimisation, maximum a-posteriori inference or M-estimation. We believe that they may be applicable in establishing some type of "replica symmetric formulas" for the free energy, inference or generalisation error in such settings. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 197,580 |
1805.12032 | Identifying and Understanding User Reactions to Deceptive and Trusted
Social News Sources | In the age of social news, it is important to understand the types of reactions that are evoked by news sources with various levels of credibility. In the present work we seek to better understand how users react to trusted and deceptive news sources across two popular, and very different, social media platforms. To that end, (1) we develop a model to classify user reactions into one of nine types, such as answer, elaboration, and question, and (2) we measure the speed and the type of reaction for trusted and deceptive news sources for 10.8M Twitter posts and 6.2M Reddit comments. We show that there are significant differences in the speed and the type of reactions between trusted and deceptive news sources on Twitter, but far smaller differences on Reddit. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 99,076
2410.06460 | A Benchmark on Directed Graph Representation Learning in Hardware
Designs | To keep pace with the rapid advancements in design complexity within modern computing systems, directed graph representation learning (DGRL) has become crucial, particularly for encoding circuit netlists, computational graphs, and developing surrogate models for hardware performance prediction. However, DGRL remains relatively unexplored, especially in the hardware domain, mainly due to the lack of comprehensive and user-friendly benchmarks. This study presents a novel benchmark comprising five hardware design datasets and 13 prediction tasks spanning various levels of circuit abstraction. We evaluate 21 DGRL models, employing diverse graph neural networks and graph transformers (GTs) as backbones, enhanced by positional encodings (PEs) tailored for directed graphs. Our results highlight that bidirected (BI) message passing neural networks (MPNNs) and robust PEs significantly enhance model performance. Notably, the top-performing models include PE-enhanced GTs interleaved with BI-MPNN layers and BI-Graph Isomorphism Network, both surpassing baselines across the 13 tasks. Additionally, our investigation into out-of-distribution (OOD) performance emphasizes the urgent need to improve OOD generalization in DGRL models. This benchmark, implemented with a modular codebase, streamlines the evaluation of DGRL models for both hardware and ML practitioners. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 496,209
1906.12018 | Pruned Landmark Labeling Meets Vertex Centric Computation: A
Surprisingly Happy Marriage! | In this paper, we study how the Pruned Landmark Labeling (PLL) algorithm can be parallelized in a scalable fashion, producing the same results as the sequential algorithm. More specifically, we parallelize using a Vertex-Centric (VC) computational model on a modern SIMD powered multicore architecture. We design a new VC-PLL algorithm that resolves the apparent mismatch between the inherent sequential dependence of the PLL algorithm and the Vertex-Centric (VC) computing model. Furthermore, we introduce a novel batch execution model for VC computation and the BVC-PLL algorithm to reduce the computational inefficiency in VC-PLL. Quite surprisingly, the theoretical analysis reveals that under a reasonable assumption, BVC-PLL has lower computational and memory access costs than PLL and indicates it may run faster than PLL as a sequential algorithm. We also demonstrate how the BVC-PLL algorithm can be extended to handle directed graphs and weighted graphs and how it can utilize the hierarchical parallelism on a modern parallel computing architecture. Extensive experiments on real-world graphs not only show that the sequential BVC-PLL can run more than two times faster than the original PLL, but also demonstrate its parallel efficiency and scalability. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 136,820
2106.01733 | A Novel SEPIC-\'Cuk Based High Gain Solar Micro-Inverter for Grid
Integration | Solar micro-inverters are becoming increasingly popular as they are modular, and they possess the capability of extracting maximum available power from the individual photovoltaic (PV) modules of a solar array. For realizing micro-inverters, single-stage transformer-less topologies are preferred as they offer better power evacuation efficacy. A SEPIC-\'Cuk based transformer-less micro-inverter, having only one high frequency switch and four line frequency switches, is proposed in this paper. The proposed converter can be employed to interface a 35 V PV module to a 220 V single phase ac grid. As a very high gain is required to be achieved for the converter, it is made to operate in discontinuous conduction mode (DCM) for all possible operating conditions. Since the ground of each PV module is connected to the ground of the utility, there is no possibility of leakage current flow between the module and the utility. Detailed simulation studies are carried out to ascertain the efficacy of the proposed micro-inverter. A laboratory prototype of the inverter is fabricated, and detailed experimental studies are carried out to confirm the viability of the proposed scheme. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 238,610
2102.01551 | An Open-Source Modular Robotic System for Telepresence and Remote
Disinfection | In a pandemic contact between humans needs to be avoided wherever possible. Robots can take over an increasing number of tasks to protect people from being exposed to others. One such task is the disinfection of environments in which infection spread is particularly likely or bears increased risks. It has been shown that UVC light is effective in neutralizing a variety of pathogens, among others the virus causing COVID-19, SARS-CoV-2. Another function which can reduce the need for physical proximity between humans is interaction via telepresence, i.e., the remote embodiment of a person controlling the robot. This work presents a modular mobile robot for telepresence and disinfection with UVC lamps. Both operation modes are supported by adaptable autonomy navigation features for facilitating efficient task execution. The platform's primary contributions are its hardware and software design, which combine consumer-grade components and 3D-printed mounting with open-source software frameworks. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 218,150 |
2109.11936 | Towards Autonomous Visual Navigation in Arable Fields | Autonomous navigation of a robot in agricultural fields is essential for every task from crop monitoring to weed management and fertilizer application. Many current approaches rely on accurate GPS, however, such technology is expensive and also prone to failure (e.g. through lack of coverage). As such, autonomous navigation through sensors that can interpret their environment (such as cameras) is important to achieve the goal of autonomy in agriculture. In this paper, we introduce a purely vision-based navigation scheme that is able to reliably guide the robot through row-crop fields without manual intervention. Independent of any global localization or mapping, this approach is able to accurately follow the crop-rows and switch between the rows, only using onboard cameras. With the help of a novel crop-row detection and a novel crop-row switching technique, our navigation scheme can be deployed in a wide range of fields with different canopy types in various growth stages with limited parameter tuning, creating a crop agnostic navigation approach. We have extensively evaluated our approach in three different fields under various illumination conditions using our agricultural robotic platform (BonnBot-I). For navigation, our approach is evaluated on five crop types and achieves an average navigation accuracy of 3.82cm relative to manual teleoperation. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 257,105 |
2302.05929 | SCLIFD: Supervised Contrastive Knowledge Distillation for Incremental
Fault Diagnosis under Limited Fault Data | Intelligent fault diagnosis has made extraordinary advancements in recent years. Nonetheless, few works tackle class-incremental learning for fault diagnosis under limited fault data, i.e., imbalanced and long-tailed fault diagnosis, which brings about various notable challenges. Initially, it is difficult to extract discriminative features from limited fault data. Moreover, a well-trained model must be retrained from scratch to classify the samples from new classes, thus causing a high computational burden and time consumption. Furthermore, the model may suffer from catastrophic forgetting when trained incrementally. Finally, the model decision is biased toward the new classes due to the class imbalance. These problems can consequently lead to performance degradation of fault diagnosis models. Accordingly, we introduce a supervised contrastive knowledge distillation for incremental fault diagnosis under limited fault data (SCLIFD) framework to address these issues, which extends the classical incremental classifier and representation learning (iCaRL) framework from three perspectives. Primarily, we adopt supervised contrastive knowledge distillation (KD) to enhance its representation learning capability under limited fault data. Moreover, we propose a novel prioritized exemplar selection method, adaptive herding (AdaHerding), to restrict the increase of the computational burden, which is also combined with KD to alleviate catastrophic forgetting. Additionally, we adopt the cosine classifier to mitigate the adverse impact of class imbalance. We conduct extensive experiments on simulated and real-world industrial processes under different imbalance ratios. Experimental results show that our SCLIFD outperforms the existing methods by a large margin. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 345,229
2305.01828 | ns-3 Implementation of Sub-Terahertz and Millimeter Wave Drop-based NYU
Channel Model (NYUSIM) | The next generation of wireless networks will use sub-THz frequencies alongside mmWave frequencies to enable multi-Gbps and low-latency applications. To enable different verticals and use cases, engineers must take a holistic approach to build, analyze, and study different parts of the network and the interplay among the lower and higher layers of the protocol stack. It is of paramount importance to accurately characterize the radio propagation in diverse scenarios such as urban microcell (UMi), urban macrocell (UMa), rural macrocell (RMa), indoor hotspot (InH), and indoor factory (InF) for a wide range of frequencies. The 3GPP statistical channel model (SCM) is oversimplified and restricted to the frequency range of 0.5-100 GHz. Thus, to overcome these limitations, this paper presents a detailed implementation of the drop-based NYU channel model (NYUSIM) for the frequency range of 0.5-150 GHz for the UMi, UMa, RMa, InH, and InF scenarios. NYUSIM allows researchers to design and evaluate new algorithms and protocols for future sub-THz wireless networks in ns-3. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 361,813 |
1810.00513 | The $\log\log$ growth of channel capacity for nondispersive nonlinear
optical fiber channel in intermediate power range. Extension of the model | In our previous paper [Phys. Rev. E 95, 062122 (2017)] we considered the optical channel modelled by the nonlinear Schr\"odinger equation with zero dispersion and additive Gaussian noise. We found the per-sample channel capacity of this model. In the present paper we extend the per-sample model by introducing the initial signal dependence on time and the output signal detection procedure. The proposed model is a closer approximation of the realistic communication link than the per-sample model, where there is no dependence of the initial signal on time. For the proposed model we found the correlators of the output signal both analytically and numerically. Using these correlators we built the conditional probability density function. Then we calculated the entropy of the output signal, the conditional entropy, and the mutual information. Maximizing the mutual information we found the optimal input signal distribution, the channel capacity, and their dependence on the shape of the initial signal in the time domain for the intermediate power range. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 109,204
2408.06277 | Multi-marginal Schr\"odinger Bridges with Iterative Reference Refinement | Practitioners often aim to infer an unobserved population trajectory using sample snapshots at multiple time points. E.g., given single-cell sequencing data, scientists would like to learn how gene expression changes over a cell's life cycle. But sequencing any cell destroys that cell. So we can access data for any particular cell only at a single time point, but we have data across many cells. The deep learning community has recently explored using Schr\"odinger bridges (SBs) and their extensions in similar settings. However, existing methods either (1) interpolate between just two time points or (2) require a single fixed reference dynamic (often set to Brownian motion within SBs). But learning piecewise from adjacent time points can fail to capture long-term dependencies. And practitioners are typically able to specify a model family for the reference dynamic but not the exact values of the parameters within it. So we propose a new method that (1) learns the unobserved trajectories from sample snapshots across multiple time points and (2) requires specification only of a family of reference dynamics, not a single fixed one. We demonstrate the advantages of our method on simulated and real data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 480,142 |
1705.07311 | Personalized Ranking for Context-Aware Venue Suggestion | Making personalized and context-aware suggestions of venues to the users is very crucial in venue recommendation. These suggestions are often based on matching the venues' features with the users' preferences, which can be collected from previously visited locations. In this paper we present a novel user-modeling approach which relies on a set of scoring functions for making personalized suggestions of venues based on venues content and reviews as well as users context. Our experiments, conducted on the dataset of the TREC Contextual Suggestion Track, prove that our methodology outperforms state-of-the-art approaches by a significant margin. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 73,806 |
1902.06420 | Moment-Based Bound on Peak-to-Average Power Ratio and Reduction with
Unitary Matrix | Reducing Peak-to-Average Power Ratio (PAPR) is a significant task in OFDM systems. To evaluate the efficiency of PAPR-reducing methods, the complementary cumulative distribution function (CCDF) of PAPR is often used. In the situation where the central limit theorem can be applied, an approximate form of the CCDF has been obtained. On the other hand, in general situations, the bound of the CCDF has been obtained under some assumptions. In this paper, we derive the bound of the CCDF with no assumption about modulation schemes. Therefore, our bound can be applied with any codewords and that our bound is written with fourth moments of codewords. Further, we propose a method to reduce the bound with unitary matrices. With this method, it is shown that our bound is closely related to the CCDF of PAPR. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 121,761 |
2207.08944 | Robustar: Interactive Toolbox Supporting Precise Data Annotation for
Robust Vision Learning | We introduce the initial release of our software Robustar, which aims to improve the robustness of vision classification machine learning models through a data-driven perspective. Building upon the recent understanding that a machine learning model's lack of robustness stems from its tendency to learn spurious features, we aim to solve this problem at its root from the data perspective by removing the spurious features from the data before training. In particular, we introduce a software tool that helps users better prepare the data for training image classification models by allowing them to annotate the spurious features at the pixel level of images. To facilitate this process, our software also leverages recent advances to help identify potential images and pixels worthy of attention and to continue the training with newly annotated data. Our software is hosted at the GitHub Repository https://github.com/HaohanWang/Robustar. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 308,737
2210.05841 | Towards Optimal Primary- and Secondary-control Design for Networks with
Generators and Inverters | For power grids predominantly featuring large synchronous generators (SGs), there exists a significant body of work bridging optimization and control tasks. A generic workflow in such efforts entails: characterizing the steady state of control algorithms and SG dynamics; assessing the optimality of the resulting operating point with respect to an optimal dispatch task; and prescribing control parameters to ensure that (under reasonable ambient perturbations) the considered control nudges the system steady state to optimality. Well studied instances of the aforementioned approach include designing: i) automatic generation control (AGC) participation factors to ensure economic optimality, and ii) governor frequency-droop slopes to ensure power sharing. Recognizing that future power grids will feature a diverse mix of SGs and inverter-based resources (IBRs) with varying control structures, this work examines the different steps of the optimization-control workflow for this context. Considering a representative model of active power-frequency dynamics of IBRs and SGs, a characterization of steady state is put forth (with and without secondary frequency control). Conditions on active-power droop slopes and AGC participation factors are then derived to ascertain desired power sharing and ensure economically optimal operation under varying power demands. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 323,024 |
1809.04176 | Phaseless Subspace Tracking | This work takes the first steps towards solving the "phaseless subspace tracking" (PST) problem. PST involves recovering a time sequence of signals (or images) from phaseless linear projections of each signal under the following structural assumption: the signal sequence is generated from a much lower dimensional subspace (than the signal dimension) and this subspace can change over time, albeit gradually. It can be simply understood as a dynamic (time-varying subspace) extension of the low-rank phase retrieval problem studied in recent work. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 107,489 |
1404.1981 | Iterative Detection and LDPC Decoding Algorithms for MIMO Systems in
Block-Fading Channels | We propose an Iterative Detection and Decoding (IDD) scheme with Low Density Parity Check (LDPC) codes for Multiple Input Multiple Output (MIMO) systems for block-fading ($F = 2$) and fast-fading Rayleigh channels. An IDD receiver with soft information processing that exploits the code structure and the behaviour of the log likelihood ratios (LLRs) is developed. Minimum Mean Square Error (MMSE) with Successive Interference Cancellation (SIC) and with Parallel Interference Cancellation (PIC) schemes are considered. The soft \textit{a posteriori} output of the decoder in a block-fading channel with Root-Check LDPC codes has allowed us to create a new strategy to improve the Bit Error Rate (BER) of a MIMO IDD scheme. In some scenarios our proposed strategy has resulted in up to 3dB of gain in terms of BER for block-fading channels and up to 1dB in fast-fading channels. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 32,162
2110.05172 | K-Wav2vec 2.0: Automatic Speech Recognition based on Joint Decoding of
Graphemes and Syllables | Wav2vec 2.0 is an end-to-end framework of self-supervised learning for speech representation that is successful in automatic speech recognition (ASR), but most of the work on the topic has been developed with a single language: English. Therefore, it is unclear whether the self-supervised framework is effective in recognizing other languages with different writing systems, such as Korean, which uses Hangul, a unique writing system. In this paper, we present K-Wav2Vec 2.0, which is a modified version of Wav2vec 2.0 designed for Korean automatic speech recognition by exploring and optimizing various factors of the original Wav2vec 2.0. In fine-tuning, we propose a multi-task hierarchical architecture to reflect the Korean writing structure. Moreover, a joint decoder is applied to alleviate the problem of words existing outside of the vocabulary. In pre-training, we attempted the cross-lingual transfer of the pre-trained model by further pre-training the English Wav2vec 2.0 on a Korean dataset, considering limited resources. Our experimental results demonstrate that the proposed method yields the best performance on both Korean ASR datasets: Ksponspeech (a large-scale Korean speech corpus) and Clovacall (a call-based dialog corpus). Further pre-training is also effective in language adaptation, leading to large improvements without additional data. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 260,190
2306.10724 | Partial Hypernetworks for Continual Learning | Hypernetworks mitigate forgetting in continual learning (CL) by generating task-dependent weights and penalizing weight changes at a meta-model level. Unfortunately, generating all weights is not only computationally expensive for larger architectures, but also, it is not well understood whether generating all model weights is necessary. Inspired by latent replay methods in CL, we propose partial weight generation for the final layers of a model using hypernetworks while freezing the initial layers. With this objective, we first answer the question of how many layers can be frozen without compromising the final performance. Through several experiments, we empirically show that the number of layers that can be frozen is proportional to the distributional similarity in the CL stream. Then, to demonstrate the effectiveness of hypernetworks, we show that noisy streams can significantly impact the performance of latent replay methods, leading to increased forgetting when features from noisy experiences are replayed with old samples. In contrast, partial hypernetworks are more robust to noise by maintaining accuracy on previous experiences. Finally, we conduct experiments on the split CIFAR-100 and TinyImagenet benchmarks and compare different versions of partial hypernetworks to latent replay methods. We conclude that partial weight generation using hypernetworks is a promising solution to the problem of forgetting in neural networks. It can provide an effective balance between computation and final test accuracy in CL streams. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 374,338 |
2302.10723 | A Cooperative Multi-Agent Probabilistic Framework for Search and Track
Missions | In this work a robust and scalable cooperative multi-agent searching and tracking framework is proposed. Specifically, we study the problem of cooperative searching and tracking of multiple moving targets by a group of autonomous mobile agents with limited sensing capabilities. We assume that the actual number of targets present is not known a priori and that target births/deaths can occur anywhere inside the surveillance region; thus, efficient search strategies are required to detect and track as many targets as possible. To address the aforementioned challenges, we recursively compute and propagate in time the searching-and-tracking (SAT) density. Using the SAT density, we then develop decentralized cooperative look-ahead strategies for efficient searching and tracking of an unknown number of targets inside a bounded surveillance area. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 346,916
2403.05652 | What is different between these datasets? | The performance of machine learning models relies heavily on the quality of input data, yet real-world applications often face significant data-related challenges. A common issue arises when curating training data or deploying models: two datasets from the same domain may exhibit differing distributions. While many techniques exist for detecting such distribution shifts, there is a lack of comprehensive methods to explain these differences in a human-understandable way beyond opaque quantitative metrics. To bridge this gap, we propose a versatile toolbox of interpretable methods for comparing datasets. Using a variety of case studies, we demonstrate the effectiveness of our approach across diverse data modalities -- including tabular data, text data, images, time series signals -- in both low and high-dimensional settings. These methods complement existing techniques by providing actionable and interpretable insights to better understand and address distribution shifts. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 436,094 |
1907.01957 | End-to-End Speech Recognition with High-Frame-Rate Features Extraction | State-of-the-art end-to-end automatic speech recognition (ASR) extracts acoustic features from input speech signal every 10 ms which corresponds to a frame rate of 100 frames/second. In this report, we investigate the use of high-frame-rate features extraction in end-to-end ASR. High frame rates of 200 and 400 frames/second are used in the features extraction and provide additional information for end-to-end ASR. The effectiveness of high-frame-rate features extraction is evaluated independently and in combination with speed perturbation based data augmentation. Experiments performed on two speech corpora, Wall Street Journal (WSJ) and CHiME-5, show that using high-frame-rate features extraction yields improved performance for end-to-end ASR, both independently and in combination with speed perturbation. On WSJ corpus, the relative reduction of word error rate (WER) yielded by high-frame-rate features extraction independently and in combination with speed perturbation are up to 21.3% and 24.1%, respectively. On CHiME-5 corpus, the corresponding relative WER reductions are up to 2.8% and 7.9%, respectively, on the test data recorded by microphone arrays and up to 11.8% and 21.2%, respectively, on the test data recorded by binaural microphones. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 137,484 |
2411.08750 | Optimal Transport-Based Displacement Interpolation with Data
Augmentation for Reduced Order Modeling of Nonlinear Dynamical Systems | We present a novel reduced-order Model (ROM) that leverages optimal transport (OT) theory and displacement interpolation to enhance the representation of nonlinear dynamics in complex systems. While traditional ROM techniques face challenges in this scenario, especially when data (i.e., observational snapshots) is limited, our method addresses these issues by introducing a data augmentation strategy based on OT principles. The proposed framework generates interpolated solutions tracing geodesic paths in the space of probability distributions, enriching the training dataset for the ROM. A key feature of our approach is its ability to provide a continuous representation of the solution's dynamics by exploiting a virtual-to-real time mapping. This enables the reconstruction of solutions at finer temporal scales than those provided by the original data. To further improve prediction accuracy, we employ Gaussian Process Regression to learn the residual and correct the representation between the interpolated snapshots and the physical solution. We demonstrate the effectiveness of our methodology with atmospheric mesoscale benchmarks characterized by highly nonlinear, advection-dominated dynamics. Our results show improved accuracy and efficiency in predicting complex system behaviors, indicating the potential of this approach for a wide range of applications in computational physics and engineering. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 508,000 |
2206.04756 | An Empirical Study on Disentanglement of Negative-free Contrastive
Learning | Negative-free contrastive learning methods have attracted a lot of attention with simplicity and impressive performances for large-scale pretraining. However, its disentanglement property remains unexplored. In this paper, we examine negative-free contrastive learning methods to study the disentanglement property empirically. We find that existing disentanglement metrics fail to make meaningful measurements for high-dimensional representation models, so we propose a new disentanglement metric based on Mutual Information between latent representations and data factors. With this proposed metric, we benchmark the disentanglement property of negative-free contrastive learning on both popular synthetic datasets and a real-world dataset CelebA. Our study shows that the investigated methods can learn a well-disentangled subset of representation. As far as we know, we are the first to extend the study of disentangled representation learning to high-dimensional representation space and introduce negative-free contrastive learning methods into this area. The source code of this paper is available at \url{https://github.com/noahcao/disentanglement_lib_med}. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 301,754 |
1811.03136 | Strategic Availability and Cost Effective UAV-based Flying Access
Networks: S-Modular Game Analysis | Telecommunication service providers deploy UAVs to provide flying network access in remote rural areas, disaster-affected areas or massively attended events (sport venues, festivals, etc.) where a full set-up to provide temporary wireless coverage would be very expensive. Of course, a UAV is battery-powered, which means a limited energy budget for both the mobility aspect and the communication aspect. An efficient solution is to allow UAVs to switch their radio modules to sleep mode in order to extend battery lifetime. This results in temporary unavailability of the communication feature. Within such a situation, the ultimate deal for a UAV operator is to provide a cost-effective service with acceptable availability. This would allow meeting some target Quality of Service while having a good market share granting satisfactory benefits. We construct a duopoly model to capture the adversarial behavior of service providers in terms of their pricing policies and their respective availability probabilities. Optimal periodic beaconing (small messages advertising the existence of a UAV) is a vital issue that needs to be addressed, given the UAVs' limited battery capacity and their recharging constraints. A full analysis of the game outcome, both in terms of equilibrium pricing and equilibrium availability, is derived. We show that the availability-pricing game exhibits some nice features, as it is sub-modular with respect to the availability policy, whereas it is super-modular with respect to the service fee. Furthermore, we implement a learning scheme using best-response dynamics that allows operators to learn their joint pricing-availability strategies in a fast, accurate and distributed fashion. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 112,760
2409.10176 | TCDformer-based Momentum Transfer Model for Long-term Sports Prediction | Accurate sports prediction is a crucial skill for professional coaches, which can assist in developing effective training strategies and scientific competition tactics. Traditional methods often use complex mathematical statistical techniques to boost predictability, but this is often limited by dataset scale and has difficulty handling long-term predictions with variable distributions, notably underperforming when predicting point-set-game multi-level matches. To deal with this challenge, this paper proposes TM2, a TCDformer-based Momentum Transfer Model for long-term sports prediction, which encompasses a momentum encoding module and a prediction module based on momentum transfer. TM2 initially encodes momentum in large-scale unstructured time series using the local linear scaling approximation (LLSA) module. Then it decomposes the reconstructed time series with momentum transfer into trend and seasonal components. The final prediction results are derived from the additive combination of a multilayer perceptron (MLP) for predicting trend components and wavelet attention mechanisms for seasonal components. Comprehensive experimental results show that on the 2023 Wimbledon men's tournament datasets, TM2 significantly surpasses existing sports prediction models in terms of performance, reducing MSE by 61.64% and MAE by 63.64%. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 488,649 |
2105.07484 | Leveraging Semantic Scene Characteristics and Multi-Stream Convolutional
Architectures in a Contextual Approach for Video-Based Visual Emotion
Recognition in the Wild | In this work we tackle the task of video-based visual emotion recognition in the wild. Standard methodologies that rely solely on the extraction of bodily and facial features often fall short of accurate emotion prediction in cases where the aforementioned sources of affective information are inaccessible due to head/body orientation, low resolution and poor illumination. We aspire to alleviate this problem by leveraging visual context in the form of scene characteristics and attributes, as part of a broader emotion recognition framework. Temporal Segment Networks (TSN) constitute the backbone of our proposed model. Apart from the RGB input modality, we make use of dense Optical Flow, following an intuitive multi-stream approach for a more effective encoding of motion. Furthermore, we shift our attention towards skeleton-based learning and leverage action-centric data as means of pre-training a Spatial-Temporal Graph Convolutional Network (ST-GCN) for the task of emotion recognition. Our extensive experiments on the challenging Body Language Dataset (BoLD) verify the superiority of our methods over existing approaches, while by properly incorporating all of the aforementioned modules in a network ensemble, we manage to surpass the previous best published recognition scores, by a large margin. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 235,447 |
cs/0612029 | A Classification of 6R Manipulators | This paper presents a classification of generic 6-revolute jointed (6R) manipulators using the homotopy class of their critical point manifold. Only part of the classification is listed in this paper because of the complexity of the homotopy classes of the 4-torus. The results of this classification will serve future research on the classification and topological properties of manipulator joint spaces and workspaces. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 539,942 |
1906.05675 | Privacy-Preserving Deep Action Recognition: An Adversarial Learning
Framework and A New Dataset | We investigate privacy-preserving, video-based action recognition in deep learning, a problem with growing importance in smart camera applications. A novel adversarial training framework is formulated to learn an anonymization transform for input videos such that the trade-off between target utility task performance and the associated privacy budgets is explicitly optimized on the anonymized videos. Notably, the privacy budget, often defined and measured in task-driven contexts, cannot be reliably indicated using any single model performance because strong protection of privacy should sustain against any malicious model that tries to steal private information. To tackle this problem, we propose two new optimization strategies of model restarting and model ensemble to achieve stronger universal privacy protection against any attacker models. Extensive experiments have been carried out and analyzed. On the other hand, given few public datasets available with both utility and privacy labels, the data-driven (supervised) learning cannot exert its full power on this task. We first discuss an innovative heuristic of cross-dataset training and evaluation, enabling the use of multiple single-task datasets (one with target task labels and the other with privacy labels) in our problem. To further address this dataset challenge, we have constructed a new dataset, termed PA-HMDB51, with both target task labels (action) and selected privacy attributes (skin color, face, gender, nudity, and relationship) annotated on a per-frame basis. This first-of-its-kind video dataset and evaluation protocol can greatly facilitate visual privacy research and open up other opportunities. Our codes, models, and the PA-HMDB51 dataset are available at https://github.com/VITA-Group/PA-HMDB51. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 135,082 |
2305.16348 | Machine learning-based characterization of hydrochar from biomass:
Implications for sustainable energy and material production | Hydrothermal carbonization (HTC) is a process that converts biomass into versatile hydrochar without the need for prior drying. The physicochemical properties of hydrochar are influenced by biomass properties and processing parameters, making it challenging to optimize for specific applications through trial-and-error experiments. To save time and money, machine learning can be used to develop a model that characterizes hydrochar produced from different biomass sources under varying reaction processing parameters. Thus, this study aims to develop an inclusive model to characterize hydrochar using a database covering a range of biomass types and reaction processing parameters. The quality and quantity of hydrochar are predicted using two models (decision tree regression and support vector regression). The decision tree regression model outperforms the support vector regression model in terms of forecast accuracy (R2 > 0.88, RMSE < 6.848, and MAE < 4.718). Using an evolutionary algorithm, optimum inputs are identified based on cost functions provided by the selected model to optimize hydrochar for energy production, soil amendment, and pollutant adsorption, resulting in hydrochar yields of 84.31%, 84.91%, and 80.40%, respectively. The feature importance analysis reveals that biomass ash/carbon content and operating temperature are the primary factors affecting hydrochar production in the HTC process. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 368,041 |
2308.06411 | Dialogue Possibilities between a Human Supervisor and UAM Air Traffic
Management: Route Alteration | This paper introduces a novel approach to detour management in Urban Air Traffic Management (UATM) using knowledge representation and reasoning. It aims to understand the complexities and requirements of UAM detours, enabling a method that quickly identifies safe and efficient routes in a carefully sampled environment. This method implemented in Answer Set Programming uses non-monotonic reasoning and a two-phase conversation between a human manager and the UATM system, considering factors like safety and potential impacts. The robustness and efficacy of the proposed method were validated through several queries from two simulation scenarios, contributing to the symbiosis of human knowledge and advanced AI techniques. The paper provides an introduction, citing relevant studies, problem formulation, solution, discussions, and concluding comments. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 385,122 |
2108.04548 | Adaptive Beam Tracking based on Recurrent Neural Networks for mmWave
Channels | The performance of millimeter wave (mmWave) communications critically depends on the accuracy of beamforming both at base station (BS) and user terminals (UEs) due to high isotropic path-loss and channel attenuation. In high mobility environments, accurate beam alignment becomes even more challenging as the angles of the BS and each UE must be tracked reliably and continuously. In this work, focusing on the beamforming at the BS, we propose an adaptive method based on Recurrent Neural Networks (RNN) that tracks and predicts the Angle of Departure (AoD) of a given UE. Moreover, we propose a modified frame structure to reduce beam alignment overhead and hence increase the communication rate. Our numerical experiments in a highly non-linear mobility scenario show that our proposed method is able to track the AoD accurately and achieve higher communication rate compared to more traditional methods such as the particle filter. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 250,040 |
1905.00543 | Optimal Power Allocation for Minimizing Outage Probability of UAV Relay
Communications | Unmanned aerial vehicle (UAV) networks have grown rapidly in recent years and become attractive for various emergency communication scenarios. In this paper, we consider a UAV acting as a relay node to assist wireless transmissions from a base station (BS) to a ground user (GU). A closed-form expression of the outage probability for the BS-GU transmission via UAV relaying is derived over Rician fading channels. We then formulate an optimization problem to minimize the outage probability of UAV relay communications with a constraint on the total transmit power of the BS and GU. It is proved that our formulated optimization problem is convex and an optimal power allocation solution is found for the outage probability minimization. Simulation results demonstrate that with an increasing power allocation factor, the outage probability initially decreases and then starts to increase, showing the existence of an optimal power allocation solution. Additionally, it is shown that the proposed optimal power allocation scheme significantly outperforms the conventional equal power allocation in terms of the outage probability. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 129,501 |
2408.14069 | Revisiting Vacuous Reduct Semantics for Abstract Argumentation (Extended
Version) | We consider the notion of a vacuous reduct semantics for abstract argumentation frameworks, which, given two abstract argumentation semantics {\sigma} and {\tau}, refines {\sigma} (base condition) by accepting only those {\sigma}-extensions that have no non-empty {\tau}-extension in their reduct (vacuity condition). We give a systematic overview on vacuous reduct semantics resulting from combining different admissibility-based and conflict-free semantics and present a principle-based analysis of vacuous reduct semantics in general. We provide criteria for the inheritance of principle satisfaction by a vacuous reduct semantics from its base and vacuity condition for established as well as recently introduced principles in the context of weak argumentation semantics. We also conduct a principle-based analysis for the special case of undisputed semantics. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 483,409 |
2412.04408 | Providing Differential Privacy for Federated Learning Over Wireless: A
Cross-layer Framework | Federated Learning (FL) is a distributed machine learning framework that inherently allows edge devices to maintain their local training data, thus providing some level of privacy. However, FL's model updates still pose a risk of privacy leakage, which must be mitigated. Over-the-air FL (OTA-FL) is an adapted FL design for wireless edge networks that leverages the natural superposition property of the wireless medium. We propose a wireless physical layer (PHY) design for OTA-FL which improves differential privacy (DP) through a decentralized, dynamic power control that utilizes both inherent Gaussian noise in the wireless channel and a cooperative jammer (CJ) for additional artificial noise generation when higher privacy levels are required. Although primarily implemented within the Upcycled-FL framework, where a resource-efficient method with first-order approximations is used at every even iteration to decrease the required information from clients, our power control strategy is applicable to any FL framework, including FedAvg and FedProx as shown in the paper. This adaptation showcases the flexibility and effectiveness of our design across different learning algorithms while maintaining a strong emphasis on privacy. Our design removes the need for client-side artificial noise injection for DP, utilizing a cooperative jammer to enhance privacy without affecting transmission efficiency for higher privacy demands. Privacy analysis is provided using the Moments Accountant method. We perform a convergence analysis for non-convex objectives to tackle heterogeneous data distributions, highlighting the inherent trade-offs between privacy and accuracy. Numerical results show that our approach with various FL algorithms outperforms the state-of-the-art under the same DP conditions on the non-i.i.d. FEMNIST dataset, and highlight the cooperative jammer's effectiveness in ensuring strict privacy. 
| false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 514,388 |
1906.01992 | Performance Modelling of Deep Learning on Intel Many Integrated Core
Architectures | Many complex problems, such as natural language processing or visual object detection, are solved using deep learning. However, efficient training of complex deep convolutional neural networks for large data sets is computationally demanding and requires parallel computing resources. In this paper, we present two parameterized performance models for estimation of execution time of training convolutional neural networks on the Intel many integrated core architecture. While for the first performance model we minimally use measurement techniques for parameter value estimation, in the second model we estimate more parameters based on measurements. We evaluate the prediction accuracy of performance models in the context of training three different convolutional neural network architectures on the Intel Xeon Phi. The achieved average performance prediction accuracy is about 15% for the first model and 11% for the second model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 133,905 |
2403.01897 | Fostering the Ecosystem of Open Neural Encoders for Portuguese with
Albertina PT* Family | To foster the neural encoding of Portuguese, this paper contributes foundation encoder models that represent an expansion of the still very scarce ecosystem of large language models specifically developed for this language that are fully open, in the sense that they are open source and openly distributed for free under an open license for any purpose, thus including research and commercial usages. Like most languages other than English, Portuguese is low-resourced in terms of these foundational language resources, the only existing ones being the inaugural 900-million-parameter Albertina and the 335-million-parameter Bertimbau. Taking these two models as an inaugural set, we present the extension of the ecosystem of state-of-the-art open encoders for Portuguese with a larger, top performance-driven model with 1.5 billion parameters, and a smaller, efficiency-driven model with 100 million parameters. While achieving this primary goal, further results that are relevant for this ecosystem were obtained as well, namely new datasets for Portuguese based on the SuperGLUE benchmark, which we also distribute openly. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 434,627 |
1604.00494 | A Fully Convolutional Neural Network for Cardiac Segmentation in
Short-Axis MRI | Automated cardiac segmentation from magnetic resonance imaging datasets is an essential step in the timely diagnosis and management of cardiac pathologies. We propose to tackle the problem of automated left and right ventricle segmentation through the application of a deep fully convolutional neural network architecture. Our model is efficiently trained end-to-end in a single learning stage from whole-image inputs and ground truths to make inference at every pixel. To our knowledge, this is the first application of a fully convolutional neural network architecture for pixel-wise labeling in cardiac magnetic resonance imaging. Numerical experiments demonstrate that our model robustly outperforms previous fully automated methods across multiple evaluation measures on a range of cardiac datasets. Moreover, our model is fast and can leverage commodity compute resources such as the graphics processing unit to enable state-of-the-art cardiac segmentation at massive scales. The models and code are available at https://github.com/vuptran/cardiac-segmentation | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 54,039 |
2501.05828 | UltraRay: Full-Path Ray Tracing for Enhancing Realism in Ultrasound
Simulation | Traditional ultrasound simulators solve the wave equation to model pressure distribution fields, achieving high accuracy but requiring significant computational time and resources. To address this, ray tracing approaches have been introduced, modeling wave propagation as rays interacting with boundaries and scatterers. However, existing models simplify ray propagation, generating echoes at interaction points without considering return paths to the sensor. This can result in unrealistic artifacts and necessitates careful scene tuning for plausible results. We propose a novel ultrasound simulation pipeline that utilizes a ray tracing algorithm to generate echo data, tracing each ray from the transducer through the scene and back to the sensor. To replicate advanced ultrasound imaging, we introduce a ray emission scheme optimized for plane wave imaging, incorporating delay and steering capabilities. Furthermore, we integrate a standard signal processing pipeline to simulate end-to-end ultrasound image formation. We showcase the efficacy of the proposed pipeline by modeling synthetic scenes featuring highly reflective objects, such as bones. In doing so, our proposed approach, UltraRay, not only enhances the overall visual quality but also improves the realism of the simulated images by accurately capturing secondary reflections and reducing unnatural artifacts. By building on top of a differentiable framework, the proposed pipeline lays the groundwork for a fast and differentiable ultrasound simulation tool necessary for gradient-based optimization, enabling advanced ultrasound beamforming strategies, neural network integration, and accurate inverse scene reconstruction. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 523,740 |
2412.20537 | Diminishing Return of Value Expansion Methods | Model-based reinforcement learning aims to increase sample efficiency, but the accuracy of dynamics models and the resulting compounding errors are often seen as key limitations. This paper empirically investigates potential sample efficiency gains from improved dynamics models in model-based value expansion methods. Our study reveals two key findings when using oracle dynamics models to eliminate compounding errors. First, longer rollout horizons enhance sample efficiency, but the improvements quickly diminish with each additional expansion step. Second, increased model accuracy only marginally improves sample efficiency compared to learned models with identical horizons. These diminishing returns in sample efficiency are particularly noteworthy when compared to model-free value expansion methods. These model-free algorithms achieve comparable performance without the computational overhead. Our results suggest that the limitation of model-based value expansion methods cannot be attributed to model accuracy. Although higher accuracy is beneficial, even perfect models do not provide unrivaled sample efficiency. Therefore, the bottleneck exists elsewhere. These results challenge the common assumption that model accuracy is the primary constraint in model-based reinforcement learning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 521,259 |
1609.05152 | Style Imitation and Chord Invention in Polyphonic Music with Exponential
Families | Modeling polyphonic music is a particularly challenging task because of the intricate interplay between melody and harmony. A good model should satisfy three requirements: statistical accuracy (capturing faithfully the statistics of correlations at various ranges, horizontally and vertically), flexibility (coping with arbitrary user constraints), and generalization capacity (inventing new material, while staying in the style of the training corpus). Models proposed so far fail on at least one of these requirements. We propose a statistical model of polyphonic music, based on the maximum entropy principle. This model is able to learn and reproduce pairwise statistics between neighboring note events in a given corpus. The model is also able to invent new chords and to harmonize unknown melodies. We evaluate the invention capacity of the model by assessing the amount of cited, re-discovered, and invented chords on a corpus of Bach chorales. We discuss how the model enables the user to specify and enforce user-defined constraints, which makes it useful for style-based, interactive music generation. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 61,085 |
1411.0997 | Iterated geometric harmonics for data imputation and reconstruction of
missing data | The method of geometric harmonics is adapted to the situation of incomplete data by means of the iterated geometric harmonics (IGH) scheme. The method is tested on natural and synthetic data sets with 50--500 data points and dimensionality of 400--10,000. Experiments suggest that the algorithm converges to a near optimal solution within 4--6 iterations, at runtimes of less than 30 minutes on a medium-grade desktop computer. The imputation of missing data values is applied to collections of damaged images (suffering from data annihilation rates of up to 70\%) which are reconstructed with a surprising degree of accuracy. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 37,296 |
2201.09419 | Automated machine learning for secure key rate in discrete-modulated
continuous-variable quantum key distribution | Continuous-variable quantum key distribution (CV QKD) with discrete modulation has attracted increasing attention due to its experimental simplicity, lower-cost implementation and compatibility with classical optical communication. Correspondingly, some novel numerical methods have been proposed to analyze the security of these protocols against collective attacks, which promotes key rates over one hundred kilometers of fiber distance. However, numerical methods are limited by their calculation time and resource consumption, which prevents them from playing a larger role on mobile platforms in quantum networks. To improve this issue, a neural network model predicting key rates in nearly real time has been proposed previously. Here, we go further and show a neural network model combined with Bayesian optimization. This model automatically designs the best neural network architecture for computing key rates in real time. We demonstrate our model with two variants of CV QKD protocols with quaternary modulation. The results show high reliability with secure probability as high as $99.15\%-99.59\%$, considerable tightness and high efficiency with speedup of approximately $10^7$ in both cases. This inspiring model enables the real-time computation of unstructured quantum key distribution protocols' key rates more automatically and efficiently, meeting the growing need to implement QKD protocols on moving platforms. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 276,669 |
1805.05974 | Crick-net: A Convolutional Neural Network based Classification Approach
for Detecting Waist High No Balls in Cricket | Cricket is undoubtedly one of the most popular games in this modern era. As human beings are prone to error, there remains a constant need for automated analysis and decision making of different events in this game. Simultaneously, with the advent and advances of Artificial Intelligence and Computer Vision, the application of these two in different domains has become an emerging trend. Applying several computer vision techniques to analyzing different cricket events and automatically making decisions has become popular in recent years. In this paper, we have deployed a CNN based classification method with Inception V3 in order to automatically detect and differentiate waist high no balls from fair balls. Our approach achieves an overall average accuracy of 88% with a fairly low cross-entropy value. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 97,511 |
2009.10312 | Stacked Generalization for Human Activity Recognition | This short paper aims to discuss the effectiveness and performance of classical machine learning approaches for Human Activity Recognition (HAR). It proposes two important models - Extra Trees and Stacked Classifier - with an emphasis on the best practices, heuristics and measures that are required to maximize the performance of those models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 196,862 |
1309.1829 | On the $k$-error linear complexity for $2^n$-periodic binary sequences
via Cube Theory | The linear complexity and $k$-error linear complexity of a sequence have been used as important measures of keystream strength, hence designing a sequence with high linear complexity and $k$-error linear complexity is a popular research topic in cryptography. In this paper, the concept of stable $k$-error linear complexity is proposed to study sequences with stable and large $k$-error linear complexity. In order to study the $k$-error linear complexity of binary sequences with period $2^n$, a new tool called cube theory is developed. By using the cube theory, one can easily construct sequences with the maximum stable $k$-error linear complexity. For such purpose, we first prove that a binary sequence with period $2^n$ can be decomposed into some disjoint cubes and further give a general decomposition approach. Second, it is proved that the maximum $k$-error linear complexity is $2^n-(2^l-1)$ over all $2^n$-periodic binary sequences, where $2^{l-1}\le k<2^{l}$. Third, a characterization is presented of the $t$th ($t>1$) decrease in the $k$-error linear complexity for a $2^n$-periodic binary sequence $s$; this is a continuation of Kurosawa et al.'s recent work on the first decrease of the $k$-error linear complexity. Finally, a counting formula for $m$-cubes with the same linear complexity is derived, which is equivalent to the counting formula for $k$-error vectors. The counting formula of $2^n$-periodic binary sequences which can be decomposed into more than one cube is also investigated, which extends an important result by Etzion et al. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 26,900 |
2410.13131 | Study of Weighted Residual Layered Belief Propagation for Decoding of
LDPC Codes | In this work, we investigate the decoding of Low-Density Parity-Check (LDPC) codes using informed dynamic scheduling algorithms that require a reduced number of iterations. In particular, we devise the weighted residual layered belief propagation (WR-LBP) decoding algorithm, which exploits the residual within a structured layer framework to reduce the number of required decoding iterations. The proposed WR-LBP algorithm is assessed against important LDPC decoding algorithms in terms of the number of iterations required for convergence and the bit error rates. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 499,384 |
2105.10912 | CiteWorth: Cite-Worthiness Detection for Improved Scientific Document
Understanding | Scientific document understanding is challenging as the data is highly domain specific and diverse. However, datasets for tasks with scientific text require expensive manual annotation and tend to be small and limited to only one or a few fields. At the same time, scientific documents contain many potential training signals, such as citations, which can be used to build large labelled datasets. Given this, we present an in-depth study of cite-worthiness detection in English, where a sentence is labelled for whether or not it cites an external source. To accomplish this, we introduce CiteWorth, a large, contextualized, rigorously cleaned labelled dataset for cite-worthiness detection built from a massive corpus of extracted plain-text scientific documents. We show that CiteWorth is high-quality, challenging, and suitable for studying problems such as domain adaptation. Our best performing cite-worthiness detection model is a paragraph-level contextualized sentence labelling model based on Longformer, exhibiting a 5 F1 point improvement over SciBERT which considers only individual sentences. Finally, we demonstrate that language model fine-tuning with cite-worthiness as a secondary task leads to improved performance on downstream scientific document understanding tasks. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | 236,540 |
2110.11943 | Solving N-player dynamic routing games with congestion: a mean field
approach | The recent emergence of navigational tools has changed traffic patterns and has now enabled new types of congestion-aware routing control like dynamic road pricing. Using the fundamental diagram of traffic flows - applied in macroscopic and mesoscopic traffic modeling - the article introduces a new N-player dynamic routing game with explicit congestion dynamics. The model is well-posed and can reproduce heterogeneous departure times and congestion spill back phenomena. However, as Nash equilibrium computations are PPAD-complete, solving the game becomes intractable for large but realistic numbers of vehicles N. Therefore, the corresponding mean field game is also introduced. Experiments were performed on several classical benchmark networks of the traffic community: the Pigou, Braess, and Sioux Falls networks with heterogeneous origin, destination and departure time tuples. The Pigou and the Braess examples reveal that the mean field approximation is generally very accurate and computationally efficient as soon as the number of vehicles exceeds a few dozen. On the Sioux Falls network (76 links, 100 time steps), this approach enables learning traffic dynamics with more than 14,000 vehicles. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | true | 262,659 |
2402.09923 | A Dataset of Open-Domain Question Answering with Multiple-Span Answers | Multi-span answer extraction, also known as the task of multi-span question answering (MSQA), is critical for real-world applications, as it requires extracting multiple pieces of information from a text to answer complex questions. Despite the active studies and rapid progress in English MSQA research, there is a notable lack of publicly available MSQA benchmarks in Chinese. Previous efforts for constructing MSQA datasets predominantly emphasized entity-centric contextualization, resulting in a bias towards collecting factoid questions and potentially overlooking questions requiring more detailed descriptive responses. To overcome these limitations, we present CLEAN, a comprehensive Chinese multi-span question answering dataset that involves a wide range of open-domain subjects with a substantial number of instances requiring descriptive answers. Additionally, we provide established models from relevant literature as baselines for CLEAN. Experimental results and analysis show the characteristics and challenge of the newly proposed CLEAN dataset for the community. Our dataset, CLEAN, will be publicly released at zhiyiluo.site/misc/clean_v1.0_sample.json. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 429,734 |
2311.09213 | GENEVA: GENErating and Visualizing branching narratives using LLMs | Dialogue-based Role Playing Games (RPGs) require powerful storytelling. The narratives of these games may take years to write and typically involve a large creative team. In this work, we demonstrate the potential of large generative text models to assist this process. GENEVA, a prototype tool, generates a rich narrative graph with branching and reconverging storylines that match a high-level narrative description and constraints provided by the designer. A large language model (LLM), GPT-4, is used to generate the branching narrative and to render it in a graph format in a two-step process. We illustrate the use of GENEVA in generating new branching narratives for four well-known stories under different contextual constraints. This tool has the potential to assist in game development, simulations, and other applications with game-like properties. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 408,041 |
2212.09928 | Improving the Robustness of Summarization Models by Detecting and Removing Input Noise | The evaluation of abstractive summarization models typically uses test data that is identically distributed as training data. In real-world practice, documents to be summarized may contain input noise caused by text extraction artifacts or data pipeline bugs. The robustness of model performance under distribution shift caused by such noise is relatively under-studied. We present a large empirical study quantifying the sometimes severe loss in performance (up to 12 ROUGE-1 points) from different types of input noise for a range of datasets and model sizes. We then propose a light-weight method for detecting and removing such noise in the input during model inference without requiring any extra training, auxiliary models, or even prior knowledge of the type of noise. Our proposed approach effectively mitigates the loss in performance, recovering a large fraction of the performance drop, sometimes as large as 11 ROUGE-1 points. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 337,258 |
2101.02381 | Boundary-Aware Geometric Encoding for Semantic Segmentation of Point Clouds | Boundary information plays a significant role in 2D image segmentation, while usually being ignored in 3D point cloud segmentation where ambiguous features might be generated in feature extraction, leading to misclassification in the transition area between two objects. In this paper, firstly, we propose a Boundary Prediction Module (BPM) to predict boundary points. Based on the predicted boundary, a boundary-aware Geometric Encoding Module (GEM) is designed to encode geometric information and aggregate features with discrimination in a neighborhood, so that the local features belonging to different categories will not be polluted by each other. To provide extra geometric information for boundary-aware GEM, we also propose a light-weight Geometric Convolution Operation (GCO), making the extracted features more distinguishing. Built upon the boundary-aware GEM, we build our network and test it on benchmarks such as ScanNet v2 and S3DIS. Results show our methods can significantly improve the baseline and achieve state-of-the-art performance. Code is available at https://github.com/JchenXu/BoundaryAwareGEM. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 214,606 |
2501.18826 | Structural Embedding Projection for Contextual Large Language Model Inference | Structured embedding transformations offer a promising approach for enhancing the efficiency and coherence of language model inference. The introduction of Structural Embedding Projection (SEP) provides a mechanism for refining token representations through projection matrices that integrate hierarchical and relational dependencies. The mathematical formulation of SEP enables embedding spaces to capture structured contextual relationships, thereby improving semantic fidelity without significantly increasing computational overhead. Experimental evaluations conducted on a range of linguistic datasets revealed that SEP contributed to reductions in perplexity and enhanced contextual coherence, demonstrating its potential to refine language model outputs. Computational efficiency assessments highlighted variations across different datasets, suggesting that the integration of structured embeddings introduced dataset-dependent trade-offs between inference speed and representational richness. The qualitative analysis of generated responses indicated that SEP enhanced narrative consistency and topic alignment, leading to improved fluency in multi-sentence text generation. The modifications to embedding layers required precise optimization to ensure stable training dynamics, as the introduction of structured transformations altered the traditional representation-learning process. The architectural adjustments necessary for SEP implementation influenced inference latency and memory consumption, requiring a balance between efficiency gains and additional processing demands. The impact of SEP on lexical diversity suggested that embedding modifications influenced the model's vocabulary usage, reflecting a more context-aware selection of generated tokens. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 528,876 |
2405.09215 | Xmodel-VLM: A Simple Baseline for Multimodal Vision Language Model | We introduce Xmodel-VLM, a cutting-edge multimodal vision language model. It is designed for efficient deployment on consumer GPU servers. Our work directly confronts a pivotal industry issue by grappling with the prohibitive service costs that hinder the broad adoption of large-scale multimodal systems. Through rigorous training, we have developed a 1B-scale language model from the ground up, employing the LLaVA paradigm for modal alignment. The result, which we call Xmodel-VLM, is a lightweight yet powerful multimodal vision language model. Extensive testing across numerous classic multimodal benchmarks has revealed that despite its smaller size and faster execution, Xmodel-VLM delivers performance comparable to that of larger models. Our model checkpoints and code are publicly available on GitHub at https://github.com/XiaoduoAILab/XmodelVLM. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 454,330 |
2004.12905 | Knowledge Base Completion for Constructing Problem-Oriented Medical Records | Both electronic health records and personal health records are typically organized by data type, with medical problems, medications, procedures, and laboratory results chronologically sorted in separate areas of the chart. As a result, it can be difficult to find all of the relevant information for answering a clinical question about a given medical problem. A promising alternative is to instead organize by problems, with related medications, procedures, and other pertinent information all grouped together. A recent effort by Buchanan (2017) manually defined, through expert consensus, 11 medical problems and the relevant labs and medications for each. We show how to use machine learning on electronic health records to instead automatically construct these problem-based groupings of relevant medications, procedures, and laboratory tests. We formulate the learning task as one of knowledge base completion, and annotate a dataset that expands the set of problems from 11 to 32. We develop a model architecture that exploits both pre-trained concept embeddings and usage data relating the concepts contained in a longitudinal dataset from a large health system. We evaluate our algorithms' ability to suggest relevant medications, procedures, and lab tests, and find that the approach provides feasible suggestions even for problems that are hidden during training. The dataset, along with code to reproduce our results, is available at https://github.com/asappresearch/kbc-pomr. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 174,384 |
2404.01936 | Settling Time vs. Accuracy Tradeoffs for Clustering Big Data | We study the theoretical and practical runtime limits of k-means and k-median clustering on large datasets. Since effectively all clustering methods are slower than the time it takes to read the dataset, the fastest approach is to quickly compress the data and perform the clustering on the compressed representation. Unfortunately, there is no universal best choice for compressing the number of points - while random sampling runs in sublinear time and coresets provide theoretical guarantees, the former does not enforce accuracy while the latter is too slow as the numbers of points and clusters grow. Indeed, it has been conjectured that any sensitivity-based coreset construction requires super-linear time in the dataset size. We examine this relationship by first showing that there does exist an algorithm that obtains coresets via sensitivity sampling in effectively linear time - within log-factors of the time it takes to read the data. Any approach that significantly improves on this must then resort to practical heuristics, leading us to consider the spectrum of sampling strategies across both real and artificial datasets in the static and streaming settings. Through this, we show the conditions in which coresets are necessary for preserving cluster validity as well as the settings in which faster, cruder sampling strategies are sufficient. As a result, we provide a comprehensive theoretical and practical blueprint for effective clustering regardless of data size. Our code is publicly available and has scripts to recreate the experiments. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 443,653 |