id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2312.01623 | Universal Segmentation at Arbitrary Granularity with Language Instruction | This paper aims to achieve universal segmentation at an arbitrary semantic level. Despite significant progress in recent years, specialist segmentation approaches are limited to specific tasks and data distributions. Retraining a new model to adapt to new scenarios or settings incurs expensive computation and time costs, which raises the demand for a versatile and universal segmentation model that can cater to various granularities. Although some attempts have been made to unify different segmentation tasks or to generalize to various scenarios, limitations in the definition of paradigms and input-output spaces make it difficult for them to achieve an accurate understanding of content at arbitrary granularity. To this end, we present UniLSeg, a universal segmentation model that can perform segmentation at any semantic level with the guidance of language instructions. For training UniLSeg, we reorganize a group of tasks from their original diverse distributions into a unified data format, where images with texts describing the segmentation targets are the input and the corresponding masks are the output. Combined with an automatic annotation engine for utilizing numerous unlabeled data, UniLSeg achieves excellent performance on various tasks and settings, surpassing both specialist and unified segmentation models. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 412,518 |
2208.10221 | Dynamic Adaptive Threshold based Learning for Noisy Annotations Robust Facial Expression Recognition | Real-world facial expression recognition (FER) datasets suffer from noisy annotations due to crowd-sourcing, ambiguity in expressions, the subjectivity of annotators and inter-class similarity. However, recent deep networks have a strong capacity to memorize noisy annotations, leading to corrupted feature embeddings and poor generalization. To handle noisy annotations, we propose a dynamic FER learning framework (DNFER) in which clean samples are selected based on a dynamic class-specific threshold during training. Specifically, DNFER is based on supervised training using selected clean samples and unsupervised consistency training using all the samples. During training, the mean posterior class probabilities of each mini-batch are used as dynamic class-specific thresholds to select the clean samples for supervised training. This threshold is independent of the noise rate and does not need any clean data, unlike other methods. In addition, to learn from all samples, the posterior distributions between weakly-augmented and strongly-augmented images are aligned using an unsupervised consistency loss. We demonstrate the robustness of DNFER on both synthetic and real noisy annotated FER datasets like RAFDB, FERPlus, SFEW and AffectNet. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 313,970 |
1503.06692 | Persistence in voting behavior: stronghold dynamics in elections | Influence among individuals is at the core of collective social phenomena such as the dissemination of ideas, beliefs or behaviors, social learning and the diffusion of innovations. Different mechanisms have been proposed to implement inter-agent influence in social models, from the voter model, to majority rules, to the Granovetter model. Here we advance in this direction by confronting the recently introduced Social Influence and Recurrent Mobility (SIRM) model, which reproduces generic features of vote-shares at different geographical levels, with data from US presidential elections. Our approach incorporates spatial and population diversity as inputs for the opinion dynamics, while individuals' mobility provides a proxy for social context and peer imitation accounts for social influence. The model captures the observed stationary background fluctuations in the vote-shares across counties. We study the so-called political strongholds, i.e., locations where the vote-shares for a party are systematically higher than average. A quantitative definition of a stronghold by means of persistence in time of fluctuations in the voting spatial distribution is introduced, and results from the US Presidential Elections during the period 1980-2012 are analyzed within this framework. We compare electoral results with simulations obtained with the SIRM model, finding good agreement both in terms of the number and the location of strongholds. The duration of strongholds is also systematically characterized in the SIRM model. The results compare well with the electoral data, revealing an exponential decay in the persistence of the strongholds with time. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 41,391 |
1409.4276 | A Fast Quartet Tree Heuristic for Hierarchical Clustering | The Minimum Quartet Tree Cost problem is to construct an optimal weight tree from the $3{n \choose 4}$ weighted quartet topologies on $n$ objects, where optimality means that the summed weight of the embedded quartet topologies is optimal (so it can be the case that the optimal tree embeds all quartets as nonoptimal topologies). We present a Monte Carlo heuristic, based on randomized hill climbing, for approximating the optimal weight tree, given the quartet topology weights. The method repeatedly transforms a dendrogram, with all objects involved as leaves, achieving a monotonic approximation to the exact single globally optimal tree. The problem and the solution heuristic have been extensively used for general hierarchical clustering of nontree-like (non-phylogeny) data in various domains and across domains with heterogeneous data. We also present a greatly improved heuristic, reducing the running time by a factor on the order of a thousand to ten thousand. All this is implemented and available as part of the CompLearn package. We compare the performance and running time of the original and improved versions with those of UPGMA, BioNJ, and NJ, as implemented in the SplitsTree package, on genomic data for which the latter are optimized. Keywords: Data and knowledge visualization, Pattern matching--Clustering--Algorithms/Similarity measures, Hierarchical clustering, Global optimization, Quartet tree, Randomized hill-climbing. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 36,056 |
2406.12638 | Efficient and Long-Tailed Generalization for Pre-trained Vision-Language Model | Pre-trained vision-language models like CLIP have shown powerful zero-shot inference ability via image-text matching and prove to be strong few-shot learners in various downstream tasks. However, in real-world scenarios, adapting CLIP to downstream tasks may encounter the following challenges: 1) data may exhibit long-tailed distributions and might not have abundant samples for all the classes; 2) there might be emerging tasks with new classes that contain no samples at all. To overcome them, we propose a novel framework to achieve efficient and long-tailed generalization, which we term Candle. During the training process, we propose a compensating logit-adjusted loss to encourage large margins of prototypes and alleviate imbalance both within the base classes and between the base and new classes. For efficient adaptation, we treat the CLIP model as a black box and leverage the extracted features to obtain visual and textual prototypes for prediction. To make full use of multi-modal information, we also propose cross-modal attention to enrich the features from both modalities. For effective generalization, we introduce virtual prototypes for new classes to make up for their lack of training images. Candle achieves state-of-the-art performance in extensive experiments on 11 diverse datasets while substantially reducing the training time, demonstrating the superiority of our approach. The source code is available at https://github.com/shijxcs/Candle. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 465,500 |
1905.05285 | Nearest Neighbor and Kernel Survival Analysis: Nonasymptotic Error Bounds and Strong Consistency Rates | We establish the first nonasymptotic error bounds for Kaplan-Meier-based nearest neighbor and kernel survival probability estimators where feature vectors reside in metric spaces. Our bounds imply rates of strong consistency for these nonparametric estimators and, up to a log factor, match an existing lower bound for conditional CDF estimation. Our proof strategy also yields nonasymptotic guarantees for nearest neighbor and kernel variants of the Nelson-Aalen cumulative hazards estimator. We experimentally compare these methods on four datasets. We find that for the kernel survival estimator, a good choice of kernel is one learned using random survival forests. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 130,675 |
2210.02092 | Functional Central Limit Theorem and Strong Law of Large Numbers for Stochastic Gradient Langevin Dynamics | We study the mixing properties of an important optimization algorithm of machine learning: the stochastic gradient Langevin dynamics (SGLD) with a fixed step size. The data stream is not assumed to be independent, hence the SGLD is not a Markov chain, merely a \emph{Markov chain in a random environment}, which complicates the mathematical treatment considerably. We derive a strong law of large numbers and a functional central limit theorem for SGLD. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 321,525 |
1806.04808 | Learning Representations of Ultrahigh-dimensional Data for Random Distance-based Outlier Detection | Learning expressive low-dimensional representations of ultrahigh-dimensional data, e.g., data with thousands/millions of features, has been a major way to enable learning methods to address the curse of dimensionality. However, existing unsupervised representation learning methods mainly focus on preserving the data regularity information and learning the representations independently of subsequent outlier detection methods, which can result in suboptimal and unstable performance at detecting irregularities (i.e., outliers). This paper introduces a ranking model-based framework, called RAMODO, to address this issue. RAMODO unifies representation learning and outlier detection to learn low-dimensional representations that are tailored for a state-of-the-art outlier detection approach - the random distance-based approach. This customized learning yields better-suited and more stable representations for the targeted outlier detectors. Additionally, RAMODO can leverage a small amount of labeled data as prior knowledge to learn more expressive and application-relevant representations. We instantiate RAMODO as an efficient method called REPEN to demonstrate the performance of RAMODO. Extensive empirical results on eight real-world ultrahigh-dimensional data sets show that REPEN (i) enables a random distance-based detector to obtain significantly better AUC performance and a two orders of magnitude speedup; (ii) performs substantially better and more stably than four state-of-the-art representation learning methods; and (iii) leverages less than 1% of labeled data to achieve up to 32% AUC improvement. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | true | false | 100,320 |
1301.6522 | Optimal Nonstationary Reproduction Distribution for Nonanticipative RDF on Abstract Alphabets | In this paper we introduce a definition of the nonanticipative Rate Distortion Function (RDF) on abstract alphabets, and we invoke weak convergence of probability measures to establish several of its properties, such as the existence of the optimal reproduction conditional distribution, compactness of the fidelity set, and lower semicontinuity of the RDF functional. Further, we derive the closed-form expression of the optimal nonstationary reproduction distribution. This expression is computed recursively backward in time. Throughout the paper we point out an operational meaning of the nonanticipative RDF by recalling the coding theorem derived in \cite{tatikonda2000}, and we state relations to Gorbunov-Pinsker's nonanticipatory $\epsilon$-entropy \cite{gorbunov-pinsker}. | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | 21,452 |
2103.09141 | Simultaneous Multi-View Camera Pose Estimation and Object Tracking with Square Planar Markers | Object tracking is a key aspect in many applications such as augmented reality in medicine (e.g. tracking a surgical instrument) or robotics. Square planar markers have become popular tools for tracking since their pose can be estimated from their four corners. While using a single marker and a single camera limits the working area considerably, using multiple markers attached to an object requires estimating their relative positions, which is not trivial for high-accuracy tracking. Likewise, using multiple cameras requires estimating their extrinsic parameters, also a tedious process that must be repeated whenever a camera is moved. This work proposes a novel method to simultaneously solve the above-mentioned problems. From a video sequence showing a rigid set of planar markers recorded from multiple cameras, the proposed method is able to automatically obtain the three-dimensional configuration of the markers, the extrinsic parameters of the cameras, and the relative pose between the markers and the cameras at each frame. Our experiments show that our approach can obtain highly accurate results for estimating these parameters using low-resolution cameras. Once the parameters are obtained, tracking of the object can be done in real time with a low computational cost. The proposed method is a step forward in the development of cost-effective solutions for object tracking. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 225,091 |
2306.00800 | FigGen: Text to Scientific Figure Generation | The generative modeling landscape has experienced tremendous growth in recent years, particularly in generating natural images and art. Recent techniques have shown impressive potential in creating complex visual compositions while delivering striking realism and quality. However, state-of-the-art methods have been focusing on the narrow domain of natural images, while other distributions remain unexplored. In this paper, we introduce the problem of text-to-figure generation, that is, creating scientific figures for papers from text descriptions. We present FigGen, a diffusion-based approach for text-to-figure generation, and discuss the main challenges of the proposed task. Code and models are available at https://github.com/joanrod/figure-diffusion | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 370,166 |
2109.14502 | Untangling Braids with Multi-agent Q-Learning | We use reinforcement learning to tackle the problem of untangling braids. We experiment with braids with 2 and 3 strands. Two competing players learn to tangle and untangle a braid. We interface the braid untangling problem with the OpenAI Gym environment, a widely used way of connecting agents to reinforcement learning problems. The results provide evidence that the more we train the system, the better the untangling player gets at untangling braids. At the same time, our tangling player produces good examples of tangled braids. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 257,983 |
2205.09360 | Evaluating Subtitle Segmentation for End-to-end Generation Systems | Subtitles appear on screen as short pieces of text, segmented based on formal constraints (length) and syntactic/semantic criteria. Subtitle segmentation can be evaluated with sequence segmentation metrics against a human reference. However, standard segmentation metrics cannot be applied when systems generate outputs different from the reference, e.g. with end-to-end subtitling systems. In this paper, we study ways to conduct reference-based evaluations of segmentation accuracy irrespective of the textual content. We first conduct a systematic analysis of existing metrics for evaluating subtitle segmentation. We then introduce $\Sigma$, a new Subtitle Segmentation Score derived from an approximate upper bound of BLEU on segmentation boundaries, which allows us to disentangle the effect of good segmentation from text quality. To compare $\Sigma$ with existing metrics, we further propose a boundary projection method from imperfect hypotheses to the true reference. Results show that all metrics are able to reward high-quality output, but for similar outputs, system ranking depends on each metric's sensitivity to error type. Our thorough analyses suggest $\Sigma$ is a promising segmentation metric, but its reliability over other segmentation metrics remains to be validated through correlations with human judgements. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 297,242 |
2411.05137 | Inclusion in Assistive Haircare Robotics: Practical and Ethical Considerations in Hair Manipulation | Robot haircare systems could provide a controlled and personalized environment that is respectful of an individual's sensitivities and may offer a comfortable experience. We argue that, because hair and hairstyles are often uniquely important in defining and expressing an individual's identity, we should approach the development of assistive robot haircare systems carefully while considering various practical and ethical concerns and risks. In this work, we specifically list and discuss the consideration of hair type, expression of the individual's preferred identity, cost accessibility of the system, culturally-aware robot strategies, and the associated societal risks. Finally, we discuss the planned studies that will allow us to better understand and address the concerns and considerations we outlined in this work through interactions with both haircare experts and end-users. Through these practical and ethical considerations, this work seeks to systematically organize and provide guidance for the development of inclusive and ethical robot haircare systems. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 506,556 |
2110.09006 | Natural Image Reconstruction from fMRI using Deep Learning: A Survey | With the advent of brain imaging techniques and machine learning tools, much effort has been devoted to building computational models to capture the encoding of visual information in the human brain. One of the most challenging brain decoding tasks is the accurate reconstruction of the perceived natural images from brain activities measured by functional magnetic resonance imaging (fMRI). In this work, we survey the most recent deep learning methods for natural image reconstruction from fMRI. We examine these methods in terms of architectural design, benchmark datasets, and evaluation metrics and present a fair performance evaluation across standardized evaluation metrics. Finally, we discuss the strengths and limitations of existing studies and present potential future directions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 261,645 |
2402.02335 | Video Editing for Video Retrieval | Though pre-trained vision-language models have demonstrated significant benefits in boosting video-text retrieval performance from large-scale web videos, fine-tuning still plays a critical role and relies on manually annotated clips with start and end times, which requires considerable human effort. To address this issue, we explore an alternative, cheaper source of annotations, single timestamps, for video-text retrieval. We initialise clips from timestamps in a heuristic way to warm up a retrieval model. Then a video clip editing method is proposed to refine the initial rough boundaries and improve retrieval performance. A student-teacher network is introduced for video clip editing. The teacher model is employed to edit the clips in the training set whereas the student model trains on the edited clips. The teacher's weights are updated from the student's after the student's performance increases. Our method is model-agnostic and applicable to any retrieval model. We conduct experiments based on three state-of-the-art retrieval models, COOT, VideoCLIP and CLIP4Clip. Experiments conducted on three video retrieval datasets, YouCook2, DiDeMo and ActivityNet-Captions, show that our edited clips consistently improve retrieval performance over the initial clips across all three retrieval models. | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | 426,501 |
1908.03995 | Temporally Discounted Differential Privacy for Evolving Datasets on an Infinite Horizon | We define discounted differential privacy, as an alternative to (conventional) differential privacy, to investigate the privacy of evolving datasets containing time series over an unbounded horizon. We use privacy loss as a measure of the amount of information leaked by the reports at a certain fixed time. We observe that privacy losses are weighted equally across time in the definition of differential privacy, and therefore the magnitude of privacy-preserving additive noise must grow without bound to ensure differential privacy over an infinite horizon. Motivated by the discounted utility theory within the economics literature, we use exponential and hyperbolic discounting of privacy losses across time to relax the definition of differential privacy under continual observations. This implies that privacy losses in the distant past are less important to an individual than current ones. We use discounted differential privacy to investigate the privacy of evolving datasets using additive Laplace noise and show that the magnitude of the additive noise can remain bounded under discounted differential privacy. We illustrate the quality of privacy-preserving mechanisms satisfying discounted differential privacy on smart-meter measurement time series of real households, made publicly available by Ausgrid (an Australian electricity distribution company). | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | false | 141,373 |
2211.04194 | Submission-Aware Reviewer Profiling for Reviewer Recommender System | Assigning qualified, unbiased and interested reviewers to paper submissions is vital for maintaining the integrity and quality of the academic publishing system and providing valuable reviews to authors. However, matching thousands of submissions with thousands of potential reviewers within a limited time is a daunting challenge for a conference program committee. Prior efforts based on topic modeling have suffered from losing the specific context that helps define the topics in a publication or submission abstract. Moreover, in some cases, the topics identified are difficult to interpret. We propose an approach that learns, from each abstract published by a potential reviewer, the topics studied and the explicit context in which the reviewer studied them. Furthermore, we contribute a new dataset for evaluating reviewer matching systems. Our experiments show a significant, consistent improvement in precision when compared with the existing methods. We also use examples to demonstrate why our recommendations are more explainable. The new approach has been deployed successfully at top-tier conferences in the last two years. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 329,169 |
2403.00999 | Distributional Dataset Distillation with Subtask Decomposition | What does a neural network learn when training from a task-specific dataset? Synthesizing this knowledge is the central idea behind Dataset Distillation, which recent work has shown can be used to compress large datasets into a small set of input-label pairs ($\textit{prototypes}$) that capture essential aspects of the original dataset. In this paper, we make the key observation that existing methods distilling into explicit prototypes are very often suboptimal, incurring unexpected storage costs from distilled labels. In response, we propose $\textit{Distributional Dataset Distillation}$ (D3), which encodes the data using minimal sufficient per-class statistics paired with a decoder, distilling the dataset into a compact distributional representation that is more memory-efficient than prototype-based methods. To scale up the process of learning these representations, we propose $\textit{Federated distillation}$, which decomposes the dataset into subsets, distills them in parallel using sub-task experts and then re-aggregates them. We thoroughly evaluate our algorithm on a three-dimensional metric and show that our method achieves state-of-the-art results on TinyImageNet and ImageNet-1K. Specifically, we outperform the prior art by $6.9\%$ on ImageNet-1K under a storage budget of 2 images per class. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 434,221 |
2407.12425 | Navigating the Noisy Crowd: Finding Key Information for Claim Verification | Claim verification is a task that involves assessing the truthfulness of a given claim based on multiple evidence pieces. Using large language models (LLMs) for claim verification is a promising approach. However, simply feeding all the evidence pieces to an LLM and asking if the claim is factual does not yield good results. The challenge lies in the noisy nature of both the evidence and the claim: evidence passages typically contain irrelevant information, with the key facts hidden within the context, while claims often convey multiple aspects simultaneously. To navigate this "noisy crowd" of information, we propose EACon (Evidence Abstraction and Claim Deconstruction), a framework designed to find key information within evidence and verify each aspect of a claim separately. EACon first finds keywords from the claim and employs fuzzy matching to select relevant keywords for each raw evidence piece. These keywords serve as a guide to extract and summarize critical information into abstracted evidence. Subsequently, EACon deconstructs the original claim into subclaims, which are then verified against both abstracted and raw evidence individually. We evaluate EACon using two open-source LLMs on two challenging datasets. Results demonstrate that EACon consistently and substantially improves LLMs' performance in claim verification. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 473,923 |
2211.04465 | Quantum Persistent Homology for Time Series | Persistent homology, a powerful mathematical tool for data analysis, summarizes the shape of data by tracking topological features across different scales. Classical algorithms for persistent homology are often constrained by running times and memory requirements that grow exponentially with the number of data points. To overcome this problem, two quantum algorithms for persistent homology have been developed, based on two different approaches. However, both of these quantum algorithms consider a data set in the form of a point cloud, which can be restrictive considering that many data sets come in the form of time series. In this paper, we alleviate this issue by establishing a quantum Takens delay-embedding algorithm, which turns a time series into a point cloud via a pertinent embedding into a higher-dimensional space. With this quantum transformation from time series to point clouds, one may then use a quantum persistent homology algorithm to extract the topological features of the point cloud associated with the original time series. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 329,255 |
2108.08641 | Successive cohorts of Twitter users show increasing activity and shrinking content horizons | The global public sphere has changed dramatically over the past decades: a significant part of public discourse now takes place on algorithmically driven platforms owned by a handful of private companies. Despite its growing importance, there is scant large-scale academic research on the long-term evolution of user behaviour on these platforms, because the data are often proprietary to the platforms. Here, we evaluate the individual behaviour of 600,000 Twitter users between 2012 and 2019 and find empirical evidence for an acceleration of the way Twitter is used on an individual level. This manifests itself in the fact that cohorts of Twitter users behave differently depending on when they joined the platform. Behaviour within a cohort is relatively consistent over time and characterised by strong internal interactions, but over time behaviour from cohort to cohort shifts towards increased activity. Specifically, we measure this in terms of more tweets per user over time, denser interactions with others via retweets, and shorter content horizons, expressed as an individual's decaying autocorrelation of topics over time. Our observations are explained by a growing proportion of active users who not only tweet more actively but also elicit more retweets. These behaviours suggest a collective contribution to an increased flow of information through each cohort's news feed -- an increase that potentially depletes available collective attention over time. Our findings complement recent, empirical work on social acceleration, which has been largely agnostic about individual user activity. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 251,327 |
1804.00429 | A Vehicle Detection Approach using Deep Learning Methodologies | The purpose of this study is to train a vehicle detector using the R-CNN and Faster R-CNN deep learning methods on sample vehicle data sets and to optimize the success rate of the trained detector, providing efficient results for vehicle detection when the trained detector is tested on the test data. The working method consists of five main stages: loading the data set, designing the convolutional neural network, configuring the training options, training the Faster R-CNN object detector and evaluating the trained detector. In addition, within the scope of the study, the Faster R-CNN and R-CNN deep learning methods are described, and experimental comparisons are made with the results obtained from vehicle detection. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 94,030 |
2209.04412 | Improving Nevergrad's Algorithm Selection Wizard NGOpt through Automated
Algorithm Configuration | Algorithm selection wizards are effective and versatile tools that automatically select an optimization algorithm given high-level information about the problem and available computational resources, such as number and type of decision variables, maximal number of evaluations, possibility to parallelize evaluations, etc. State-of-the-art algorithm selection wizards are complex and difficult to improve. We propose in this work the use of automated configuration methods for improving their performance by finding better configurations of the algorithms that compose them. In particular, we use elitist iterated racing (irace) to find CMA configurations for specific artificial benchmarks that replace the hand-crafted CMA configurations currently used in the NGOpt wizard provided by the Nevergrad platform. We discuss in detail the setup of irace for the purpose of generating configurations that work well over the diverse set of problem instances within each benchmark. Our approach improves the performance of the NGOpt wizard, even on benchmark suites that were not part of the tuning by irace. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 316,774 |
2006.16894 | QoE Based Revenue Maximizing Dynamic Resource Allocation and Pricing for
Fog-Enabled Mission-Critical IoT Applications | Fog computing is becoming a vital component for Internet of things (IoT) applications, acting as its computational engine. Mission-critical IoT applications are highly sensitive to latency, which depends on the physical location of the cloud server. Fog nodes of varying response rates are available to the cloud service provider (CSP) and it is faced with a challenge of forwarding the sequentially received IoT data to one of the fog nodes for processing. Since the arrival times and nature of requests is random, it is important to optimally classify the requests in real-time and allocate available virtual machine instances (VMIs) at the fog nodes to provide a high QoE to the users and consequently generate higher revenues for the CSP. In this paper, we use a pricing policy based on the QoE of the applications as a result of the allocation and obtain an optimal dynamic allocation rule based on the statistical information of the computational requests. The developed solution is statistically optimal, dynamic, and implementable in real-time as opposed to other static matching schemes in the literature. The performance of the proposed framework has been evaluated using simulations and the results show significant improvement as compared with benchmark schemes. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 184,940 |
2302.09765 | ENInst: Enhancing Weakly-supervised Low-shot Instance Segmentation | We address a weakly-supervised low-shot instance segmentation, an annotation-efficient training method to deal with novel classes effectively. Since it is an under-explored problem, we first investigate the difficulty of the problem and identify the performance bottleneck by conducting systematic analyses of model components and individual sub-tasks with a simple baseline model. Based on the analyses, we propose ENInst with sub-task enhancement methods: instance-wise mask refinement for enhancing pixel localization quality and novel classifier composition for improving classification accuracy. Our proposed method lifts the overall performance by enhancing the performance of each sub-task. We demonstrate that our ENInst is 7.5 times more efficient in achieving comparable performance to the existing fully-supervised few-shot models and even outperforms them at times. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 346,556 |
2104.02745 | InverseForm: A Loss Function for Structured Boundary-Aware Segmentation | We present a novel boundary-aware loss term for semantic segmentation using an inverse-transformation network, which efficiently learns the degree of parametric transformations between estimated and target boundaries. This plug-in loss term complements the cross-entropy loss in capturing boundary transformations and allows consistent and significant performance improvement on segmentation backbone models without increasing their size and computational complexity. We analyze the quantitative and qualitative effects of our loss function on three indoor and outdoor segmentation benchmarks, including Cityscapes, NYU-Depth-v2, and PASCAL, integrating it into the training phase of several backbone networks in both single-task and multi-task settings. Our extensive experiments show that the proposed method consistently outperforms baselines, and even sets the new state-of-the-art on two datasets. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 228,832 |
1112.5629 | High-Rank Matrix Completion and Subspace Clustering with Missing Data | This paper considers the problem of completing a matrix with many missing entries under the assumption that the columns of the matrix belong to a union of multiple low-rank subspaces. This generalizes the standard low-rank matrix completion problem to situations in which the matrix rank can be quite high or even full rank. Since the columns belong to a union of subspaces, this problem may also be viewed as a missing-data version of the subspace clustering problem. Let X be an n x N matrix whose (complete) columns lie in a union of at most k subspaces, each of rank <= r < n, and assume N >> kn. The main result of the paper shows that under mild assumptions each column of X can be perfectly recovered with high probability from an incomplete version so long as at least CrNlog^2(n) entries of X are observed uniformly at random, with C>1 a constant depending on the usual incoherence conditions, the geometrical arrangement of subspaces, and the distribution of columns over the subspaces. The result is illustrated with numerical experiments and an application to Internet distance matrix completion and topology identification. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 13,576 |
2308.12031 | CACTUS: a Comprehensive Abstraction and Classification Tool for
Uncovering Structures | The availability of large data sets is providing an impetus for driving current artificial intelligent developments. There are, however, challenges for developing solutions with small data sets due to practical and cost-effective deployment and the opacity of deep learning models. The Comprehensive Abstraction and Classification Tool for Uncovering Structures called CACTUS is presented for improved secure analytics by effectively employing explainable artificial intelligence. It provides additional support for categorical attributes, preserving their original meaning, optimising memory usage, and speeding up the computation through parallelisation. It shows to the user the frequency of the attributes in each class and ranks them by their discriminative power. Its performance is assessed by application to the Wisconsin diagnostic breast cancer and Thyroid0387 data sets. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 387,387 |
2410.00927 | Text Clustering as Classification with LLMs | Text clustering remains valuable in real-world applications where manual labeling is cost-prohibitive. It facilitates efficient organization and analysis of information by grouping similar texts based on their representations. However, implementing this approach necessitates fine-tuned embedders for downstream data and sophisticated similarity metrics. To address this issue, this study presents a novel framework for text clustering that effectively leverages the in-context learning capacity of Large Language Models (LLMs). Instead of fine-tuning embedders, we propose to transform the text clustering into a classification task via LLM. First, we prompt LLM to generate potential labels for a given dataset. Second, after integrating similar labels generated by the LLM, we prompt the LLM to assign the most appropriate label to each sample in the dataset. Our framework has been experimentally proven to achieve comparable or superior performance to state-of-the-art clustering methods that employ embeddings, without requiring complex fine-tuning or clustering algorithms. We make our code available to the public for utilization at https://github.com/ECNU-Text-Computing/Text-Clustering-via-LLM. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 493,538 |
2311.13133 | LIMIT: Less Is More for Instruction Tuning Across Evaluation Paradigms | Large Language Models are traditionally finetuned on large instruction datasets. However recent studies suggest that small, high-quality datasets can suffice for general purpose instruction following. This lack of consensus surrounding finetuning best practices is in part due to rapidly diverging approaches to LLM evaluation. In this study, we ask whether a small amount of diverse finetuning samples can improve performance on both traditional perplexity-based NLP benchmarks, and on open-ended, model-based evaluation. We finetune open-source MPT-7B and MPT-30B models on instruction finetuning datasets of various sizes ranging from 1k to 60k samples. We find that subsets of 1k-6k instruction finetuning samples are sufficient to achieve good performance on both (1) traditional NLP benchmarks and (2) model-based evaluation. Finally, we show that mixing textbook-style and open-ended QA finetuning datasets optimizes performance on both evaluation paradigms. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 409,632 |
1608.06807 | Efficient Training for Positive Unlabeled Learning | Positive unlabeled (PU) learning is useful in various practical situations, where there is a need to learn a classifier for a class of interest from an unlabeled data set, which may contain anomalies as well as samples from unknown classes. The learning task can be formulated as an optimization problem under the framework of statistical learning theory. Recent studies have theoretically analyzed its properties and generalization performance, nevertheless, little effort has been made to consider the problem of scalability, especially when large sets of unlabeled data are available. In this work we propose a novel scalable PU learning algorithm that is theoretically proven to provide the optimal solution, while showing superior computational and memory performance. Experimental evaluation confirms the theoretical evidence and shows that the proposed method can be successfully applied to a large variety of real-world problems involving PU learning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 60,163 |
1609.04281 | Document Filtering for Long-tail Entities | Filtering relevant documents with respect to entities is an essential task in the context of knowledge base construction and maintenance. It entails processing a time-ordered stream of documents that might be relevant to an entity in order to select only those that contain vital information. State-of-the-art approaches to document filtering for popular entities are entity-dependent: they rely on and are also trained on the specifics of differentiating features for each specific entity. Moreover, these approaches tend to use so-called extrinsic information such as Wikipedia page views and related entities, which is typically available only for popular head entities. Entity-dependent approaches based on such signals are therefore ill-suited as filtering methods for long-tail entities. In this paper we propose a document filtering method for long-tail entities that is entity-independent and thus also generalizes to unseen or rarely seen entities. It is based on intrinsic features, i.e., features that are derived from the documents in which the entities are mentioned. We propose a set of features that capture informativeness, entity-saliency, and timeliness. In particular, we introduce features based on entity aspect similarities, relation patterns, and temporal expressions and combine these with standard features for document filtering. Experiments following the TREC KBA 2014 setup on a publicly available dataset show that our model is able to improve the filtering performance for long-tail entities over several baselines. Results of applying the model to unseen entities are promising, indicating that the model is able to learn the general characteristics of a vital document. The overall performance across all entities---i.e., not just long-tail entities---improves upon the state-of-the-art without depending on any entity-specific training data. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 60,974
2003.04745 | Spitzoid Lesions Diagnosis based on GA feature selection and Random
Forest | Spitzoid lesions are broadly categorized into Spitz Nevus (SN), Atypical Spitz Tumors (AST), and Spitz Melanomas (SM). The accurate diagnosis of these lesions is one of the biggest challenges for dermatopathologists; this is due to the high similarity between them. Data mining techniques are successfully applied to situations like these where complexity exists. This study aims to develop an artificial intelligence model to support the diagnosis of Spitzoid lesions. A private Spitzoid lesions dataset has been used to evaluate the system proposed in this study. The proposed system has three stages. In the first stage, the SMOTE method is applied to solve the imbalanced data problem; in the second stage, in order to eliminate irrelevant features, a genetic algorithm is used to select significant features. This reduces the computational complexity and speeds up the data mining process. In the third stage, a Random Forest classifier is employed to make a decision for two different categories of lesions (Spitz Nevus or Atypical Spitz Tumors). The performance of our proposed scheme is evaluated using accuracy, sensitivity, specificity, G-mean, F-measure, ROC and AUC. Results obtained with our SMOTE-GA-RF model with 16 GA-based features show great performance with accuracy 0.97, F-measure 0.98, AUC 0.98, and G-mean 0.97. Results obtained in this study have the potential to open new opportunities in the diagnosis of Spitzoid lesions. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 167,648
2301.02086 | A Probabilistic Framework for Visual Localization in Ambiguous Scenes | Visual localization allows autonomous robots to relocalize when losing track of their pose by matching their current observation with past ones. However, ambiguous scenes pose a challenge for such systems, as repetitive structures can be viewed from many distinct, equally likely camera poses, which means it is not sufficient to produce a single best pose hypothesis. In this work, we propose a probabilistic framework that for a given image predicts the arbitrarily shaped posterior distribution of its camera pose. We do this via a novel formulation of camera pose regression using variational inference, which allows sampling from the predicted distribution. Our method outperforms existing methods on localization in ambiguous scenes. Code and data will be released at https://github.com/efreidun/vapor. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 339,413 |
2307.00858 | Beyond the Snapshot: Brain Tokenized Graph Transformer for Longitudinal
Brain Functional Connectome Embedding | Under the framework of network-based neurodegeneration, brain functional connectome (FC)-based Graph Neural Networks (GNN) have emerged as a valuable tool for the diagnosis and prognosis of neurodegenerative diseases such as Alzheimer's disease (AD). However, these models are tailored for brain FC at a single time point instead of characterizing FC trajectory. Discerning how FC evolves with disease progression, particularly at the predementia stages such as cognitively normal individuals with amyloid deposition or individuals with mild cognitive impairment (MCI), is crucial for delineating disease spreading patterns and developing effective strategies to slow down or even halt disease advancement. In this work, we proposed the first interpretable framework for brain FC trajectory embedding with application to neurodegenerative disease diagnosis and prognosis, namely Brain Tokenized Graph Transformer (Brain TokenGT). It consists of two modules: 1) Graph Invariant and Variant Embedding (GIVE) for generation of node and spatio-temporal edge embeddings, which were tokenized for downstream processing; 2) Brain Informed Graph Transformer Readout (BIGTR) which augments previous tokens with trainable type identifiers and non-trainable node identifiers and feeds them into a standard transformer encoder to readout. We conducted extensive experiments on two public longitudinal fMRI datasets of the AD continuum for three tasks, including differentiating MCI from controls, predicting dementia conversion in MCI, and classification of amyloid positive or negative cognitively normal individuals. Based on brain FC trajectory, the proposed Brain TokenGT approach outperformed all the other benchmark models and at the same time provided excellent interpretability. The code is available at https://github.com/ZijianD/Brain-TokenGT.git | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 377,153
2410.17261 | Masked Autoencoder with Swin Transformer Network for Mitigating
Electrode Shift in HD-EMG-based Gesture Recognition | Multi-channel surface Electromyography (sEMG), also referred to as high-density sEMG (HD-sEMG), plays a crucial role in improving gesture recognition performance for myoelectric control. Pattern recognition models developed based on HD-sEMG, however, are vulnerable to changing recording conditions (e.g., signal variability due to electrode shift). This has resulted in significant degradation in performance across subjects and sessions. In this context, the paper proposes the Masked Autoencoder with Swin Transformer (MAST) framework, where training is performed on a masked subset of HD-sEMG channels. A combination of four masking strategies, i.e., random block masking, temporal masking, sensor-wise random masking, and multi-scale masking, is used to learn latent representations and increase robustness against electrode shift. The masked data is then passed through MAST's three-path encoder-decoder structure, leveraging a multi-path Swin-Unet architecture that simultaneously captures time-domain, frequency-domain, and magnitude-based features of the underlying HD-sEMG signal. These augmented inputs are then used in a self-supervised pre-training fashion to improve the model's generalization capabilities. Experimental results demonstrate the superior performance of the proposed MAST framework in comparison to its counterparts. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 501,388
2310.11672 | Open-ended Commonsense Reasoning with Unrestricted Answer Scope | Open-ended Commonsense Reasoning is defined as solving a commonsense question without providing 1) a short list of answer candidates and 2) a pre-defined answer scope. Conventional ways of formulating the commonsense question into a question-answering form or utilizing external knowledge to learn retrieval-based methods are less applicable in the open-ended setting due to an inherent challenge. Without pre-defining an answer scope or a few candidates, open-ended commonsense reasoning entails predicting answers by searching over an extremely large searching space. Moreover, most questions require implicit multi-hop reasoning, which presents even more challenges to our problem. In this work, we leverage pre-trained language models to iteratively retrieve reasoning paths on the external knowledge base, which does not require task-specific supervision. The reasoning paths can help to identify the most precise answer to the commonsense question. We conduct experiments on two commonsense benchmark datasets. Compared to other approaches, our proposed method achieves better performance both quantitatively and qualitatively. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 400,732 |
2403.14014 | Crowdsourcing Task Traces for Service Robotics | Demonstration is an effective end-user development paradigm for teaching robots how to perform new tasks. In this paper, we posit that demonstration is useful not only as a teaching tool, but also as a way to understand and assist end-user developers in thinking about a task at hand. As a first step toward gaining this understanding, we constructed a lightweight web interface to crowdsource step-by-step instructions of common household tasks, leveraging the imaginations and past experiences of potential end-user developers. As evidence of the utility of our interface, we deployed the interface on Amazon Mechanical Turk and collected 207 task traces that span 18 different task categories. We describe our vision for how these task traces can be operationalized as task models within end-user development tools and provide a roadmap for future work. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 439,871 |
2209.12665 | Hybrid AI-based Anomaly Detection Model using Phasor Measurement Unit
Data | Over the last few decades, extensive use of information and communication technologies has been the main driver of the digitalization of power systems. Proper and secure monitoring of the critical grid infrastructure became an integral part of the modern power system. Using phasor measurement units (PMUs) to surveil the power system is one of the technologies that have a promising future. Increased frequency of measurements and smarter methods for data handling can improve the ability to reliably operate power grids. The increased cyber-physical interaction offers both benefits and drawbacks, where one of the drawbacks comes in the form of anomalies in the measurement data. The anomalies can be caused by both physical faults on the power grid, as well as disturbances, errors, and cyber attacks in the cyber layer. This paper aims to develop a hybrid AI-based model that is based on various methods such as Long Short Term Memory (LSTM), Convolutional Neural Network (CNN) and other relevant hybrid algorithms for anomaly detection in phasor measurement unit data. The dataset used within this research was acquired by the University of Texas, which consists of real data from grid measurements. In addition to the real data, false data that has been injected to produce anomalies has been analyzed. The impacts and mitigating methods to prevent such kind of anomalies are discussed. | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | 319,612 |
2009.01368 | Cost-aware Feature Selection for IoT Device Classification | Classification of IoT devices into different types is of paramount importance, from multiple perspectives, including security and privacy aspects. Recent works have explored machine learning techniques for fingerprinting (or classifying) IoT devices, with promising results. However, existing works have assumed that the features used for building the machine learning models are readily available or can be easily extracted from the network traffic; in other words, they do not consider the costs associated with feature extraction. In this work, we take a more realistic approach, and argue that feature extraction has a cost, and the costs are different for different features. We also take a step forward from the current practice of considering the misclassification loss as a binary value, and make a case for different losses based on the misclassification performance. Thereby, and more importantly, we introduce the notion of risk for IoT device classification. We define and formulate the problem of cost-aware IoT device classification. This being a combinatorial optimization problem, we develop a novel algorithm to solve it in a fast and effective way using the Cross-Entropy (CE) based stochastic optimization technique. Using traffic of real devices, we demonstrate the capability of the CE based algorithm in selecting features with minimal risk of misclassification while keeping the cost for feature extraction within a specified limit. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 194,284 |
2306.13954 | Characterizing the Emotion Carriers of COVID-19 Misinformation and Their
Impact on Vaccination Outcomes in India and the United States | The COVID-19 Infodemic had an unprecedented impact on health behaviors and outcomes at a global scale. While many studies have focused on a qualitative and quantitative understanding of misinformation, including sentiment analysis, there is a gap in understanding the emotion-carriers of misinformation and their differences across geographies. In this study, we characterized emotion carriers and their impact on vaccination rates in India and the United States. A manually labelled dataset was created from 2.3 million tweets and collated with three publicly available datasets (CoAID, AntiVax, CMU) to train deep learning models for misinformation classification. Misinformation labelled tweets were further analyzed for behavioral aspects by leveraging Plutchik Transformers to determine the emotion for each tweet. Time series analysis was conducted to study the impact of misinformation on spatial and temporal characteristics. Further, categorical classification was performed using transformer models to assign categories for the misinformation tweets. Word2Vec+BiLSTM was the best model for misinformation classification, with an F1-score of 0.92. The US had the highest proportion of misinformation tweets (58.02%), followed by the UK (10.38%) and India (7.33%). Disgust, anticipation, and anger were associated with an increased prevalence of misinformation tweets. Disgust was the predominant emotion associated with misinformation tweets in the US, while anticipation was the predominant emotion in India. For India, the misinformation rate exhibited a lead relationship with vaccination, while in the US it lagged behind vaccination. Our study deciphered that emotions acted as differential carriers of misinformation across geography and time. These carriers can be monitored to develop strategic interventions for countering misinformation, leading to improved public health. | false | false | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | 375,472
1812.03859 | The particle track reconstruction based on deep learning neural networks | One of the most important problems of data processing in high energy and nuclear physics is event reconstruction. Its main part is the track reconstruction procedure, which consists in looking for all tracks that elementary particles leave when they pass through a detector, among a huge number of points, so-called hits, produced when flying particles fire detector coordinate planes. Unfortunately, tracking is seriously impeded by the famous shortcoming of multiwire, strip, and GEM detectors: the appearance of a lot of fake hits caused by extra spurious crossings of fired strips. Since the number of those fakes is several orders of magnitude greater than that of true hits, one faces the quite serious difficulty of unraveling possible track-candidates via true hits while ignoring fakes. On the basis of our previous two-stage approach based on hits preprocessing using directed K-d tree search followed by a deep neural classifier, we introduce here two new tracking algorithms. Both algorithms combine those two stages in one while using different types of deep neural nets. We show that both proposed deep networks do not require any special preprocessing stage, are more accurate, faster, and can be parallelized more easily. Preliminary results of our new approaches for simulated events are presented. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 116,096
2106.11445 | KEA: Tuning an Exabyte-Scale Data Infrastructure | Microsoft's internal big-data infrastructure is one of the largest in the world -- with over 300k machines running billions of tasks from over 0.6M daily jobs. Operating this infrastructure is a costly and complex endeavor, and efficiency is paramount. In fact, for over 15 years, a dedicated engineering team has tuned almost every aspect of this infrastructure, achieving state-of-the-art efficiency (>60% average CPU utilization across all clusters). Despite rich telemetry and strong expertise, faced with evolving hardware/software/workloads, this manual tuning approach had reached its limit -- we had plateaued. In this paper, we present KEA, a multi-year effort to automate our tuning processes to be fully data/model-driven. KEA leverages a mix of domain knowledge and principled data science to capture the essence of our cluster dynamic behavior in a set of machine learning (ML) models based on collected system data. These models power automated optimization procedures for parameter tuning, and inform our leadership in critical decisions around engineering and capacity management (such as hardware and data center design, software investments, etc.). We combine "observational" tuning (i.e., using models to predict system behavior without direct experimentation) with judicious use of "flighting" (i.e., conservative testing in production). This allows us to support a broad range of applications that we discuss in this paper. KEA continuously tunes our cluster configurations and is on track to save Microsoft tens of millions of dollars per year. To the best of our knowledge, this paper is the first to discuss research challenges and practical learnings that emerge when tuning an exabyte-scale data infrastructure. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 242,394
2409.18435 | Multi-agent Reinforcement Learning for Dynamic Dispatching in Material
Handling Systems | This paper proposes a multi-agent reinforcement learning (MARL) approach to learn dynamic dispatching strategies, which is crucial for optimizing throughput in material handling systems across diverse industries. To benchmark our method, we developed a material handling environment that reflects the complexities of an actual system, such as various activities at different locations, physical constraints, and inherent uncertainties. To enhance exploration during learning, we propose a method to integrate domain knowledge in the form of existing dynamic dispatching heuristics. Our experimental results show that our method can outperform heuristics by up to 7.4 percent in terms of median throughput. Additionally, we analyze the effect of different architectures on MARL performance when training multiple agents with different functions. We also demonstrate that the MARL agents performance can be further improved by using the first iteration of MARL agents as heuristics to train a second iteration of MARL agents. This work demonstrates the potential of applying MARL to learn effective dynamic dispatching strategies that may be deployed in real-world systems to improve business outcomes. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | false | false | false | 492,255 |
2411.03819 | SA3DIP: Segment Any 3D Instance with Potential 3D Priors | The proliferation of 2D foundation models has sparked research into adapting them for open-world 3D instance segmentation. Recent methods introduce a paradigm that leverages superpoints as geometric primitives and incorporates 2D multi-view masks from Segment Anything model (SAM) as merging guidance, achieving outstanding zero-shot instance segmentation results. However, the limited use of 3D priors restricts the segmentation performance. Previous methods calculate the 3D superpoints solely based on estimated normal from spatial coordinates, resulting in under-segmentation for instances with similar geometry. Besides, the heavy reliance on SAM and hand-crafted algorithms in 2D space suffers from over-segmentation due to SAM's inherent part-level segmentation tendency. To address these issues, we propose SA3DIP, a novel method for Segmenting Any 3D Instances via exploiting potential 3D Priors. Specifically, on one hand, we generate complementary 3D primitives based on both geometric and textural priors, which reduces the initial errors that accumulate in subsequent procedures. On the other hand, we introduce supplemental constraints from the 3D space by using a 3D detector to guide a further merging process. Furthermore, we notice a considerable portion of low-quality ground truth annotations in ScanNetV2 benchmark, which affect the fair evaluations. Thus, we present ScanNetV2-INS with complete ground truth labels and supplement additional instances for 3D class-agnostic instance segmentation. Experimental evaluations on various 2D-3D datasets demonstrate the effectiveness and robustness of our approach. Our code and proposed ScanNetV2-INS dataset are available HERE. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 506,050 |
2105.13038 | LVD-NMPC: A Learning-based Vision Dynamics Approach to Nonlinear Model Predictive Control for Autonomous Vehicles | In this paper, we introduce a learning-based vision dynamics approach to nonlinear model predictive control for autonomous vehicles, coined LVD-NMPC. LVD-NMPC uses an a-priori process model and a learned vision dynamics model used to calculate the dynamics of the driving scene, the controlled system's desired state trajectory and the weighting gains of the quadratic cost function optimized by a constrained predictive controller. The vision system is defined as a deep neural network designed to estimate the dynamics of the images scene. The input is based on historic sequences of sensory observations and vehicle states, integrated by an Augmented Memory component. Deep Q-Learning is used to train the deep network, which once trained can be used to also calculate the desired trajectory of the vehicle. We evaluate LVD-NMPC against a baseline Dynamic Window Approach (DWA) path planning executed using standard NMPC, as well as against the PilotNet neural network. Performance is measured in our simulation environment GridSim, on a real-world 1:8 scaled model car, as well as on a real size autonomous test vehicle and the nuScenes computer vision dataset. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 237,187 |
2305.15420 | A Hybrid Semantic-Geometric Approach for Clutter-Resistant Floorplan Generation from Building Point Clouds | Building Information Modeling (BIM) technology is a key component of modern construction engineering and project management workflows. As-is BIM models that represent the spatial reality of a project site can offer crucial information to stakeholders for construction progress monitoring, error checking, and building maintenance purposes. Geometric methods for automatically converting raw scan data into BIM models (Scan-to-BIM) often fail to make use of higher-level semantic information in the data. Whereas, semantic segmentation methods only output labels at the point level without creating object level models that is necessary for BIM. To address these issues, this research proposes a hybrid semantic-geometric approach for clutter-resistant floorplan generation from laser-scanned building point clouds. The input point clouds are first pre-processed by normalizing the coordinate system and removing outliers. Then, a semantic segmentation network based on PointNet++ is used to label each point as ceiling, floor, wall, door, stair, and clutter. The clutter points are removed whereas the wall, door, and stair points are used for 2D floorplan generation. A region-growing segmentation algorithm paired with geometric reasoning rules is applied to group the points together into individual building elements. Finally, a 2-fold Random Sample Consensus (RANSAC) algorithm is applied to parameterize the building elements into 2D lines which are used to create the output floorplan. The proposed method is evaluated using the metrics of precision, recall, Intersection-over-Union (IOU), Betti error, and warping error. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 367,616 |
1903.09850 | Action-Centered Information Retrieval | Information Retrieval (IR) aims at retrieving documents that are most relevant to a query provided by a user. Traditional techniques rely mostly on syntactic methods. In some cases, however, links at a deeper semantic level must be considered. In this paper, we explore a type of IR task in which documents describe sequences of events, and queries are about the state of the world after such events. In this context, successfully matching documents and query requires considering the events' possibly implicit, uncertain effects and side-effects. We begin by analyzing the problem, then propose an action language based formalization, and finally automate the corresponding IR task using Answer Set Programming. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 125,151 |
1606.04861 | A necessary and sufficient condition for minimum phase and implications for phase retrieval | We give a necessary and sufficient condition for a function $E(t)$ being of minimum phase, and hence for its phase being univocally determined by its intensity $|E(t)|^2$. This condition is based on the knowledge of $E(t)$ alone and not of its analytic continuation in the complex plane, thus greatly simplifying its practical applicability. We apply these results to find the class of all band-limited signals that correspond to distinct receiver states when the detector is sensitive to the field intensity only and insensitive to the field phase, and discuss the performance of a recently proposed transmission scheme able to linearly detect all distinguishable states. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 57,322 |
2002.05837 | PushdownDB: Accelerating a DBMS using S3 Computation | This paper studies the effectiveness of pushing parts of DBMS analytics queries into the Simple Storage Service (S3) engine of Amazon Web Services (AWS), using a recently released capability called S3 Select. We show that some DBMS primitives (filter, projection, aggregation) can always be cost-effectively moved into S3. Other more complex operations (join, top-K, group-by) require reimplementation to take advantage of S3 Select and are often candidates for pushdown. We demonstrate these capabilities through experimentation using a new DBMS that we developed, PushdownDB. Experimentation with a collection of queries including TPC-H queries shows that PushdownDB is on average 30% cheaper and 6.7X faster than a baseline that does not use S3 Select. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 164,018 |
2401.06172 | CRISIS ALERT:Forecasting Stock Market Crisis Events Using Machine Learning Methods | Historically, the economic recession often came abruptly and disastrously. For instance, during the 2008 financial crisis, the SP 500 fell 46 percent from October 2007 to March 2009. If we could detect the signals of the crisis earlier, we could have taken preventive measures. Therefore, driven by such motivation, we use advanced machine learning techniques, including Random Forest and Extreme Gradient Boosting, to predict any potential market crashes mainly in the US market. Also, we would like to compare the performance of these methods and examine which model is better for forecasting US stock market crashes. We apply our models on the daily financial market data, which tend to be more responsive with higher reporting frequencies. We consider 75 explanatory variables, including general US stock market indexes, SP 500 sector indexes, as well as market indicators that can be used for the purpose of crisis prediction. Finally, we conclude, with selected classification metrics, that the Extreme Gradient Boosting method performs the best in predicting US stock market crisis events. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 421,055 |
2305.12272 | Autoregressive Modeling with Lookahead Attention | To predict the next token, autoregressive models ordinarily examine the past. Could they also benefit from also examining hypothetical futures? We consider a novel Transformer-based autoregressive architecture that estimates the next-token distribution by extrapolating multiple continuations of the past, according to some proposal distribution, and attending to these extended strings. This architecture draws insights from classical AI systems such as board game players: when making a local decision, a policy may benefit from exploring possible future trajectories and analyzing them. On multiple tasks including morphological inflection and Boolean satisfiability, our lookahead model is able to outperform the ordinary Transformer model of comparable size. However, on some tasks, it appears to be benefiting from the extra computation without actually using the lookahead information. We discuss possible variant architectures as well as future speedups. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 365,922 |
2011.10334 | Efficient Data-Dependent Learnability | The predictive normalized maximum likelihood (pNML) approach has recently been proposed as the min-max optimal solution to the batch learning problem where both the training set and the test data feature are individuals, known sequences. This approach has yields a learnability measure that can also be interpreted as a stability measure. This measure has shown some potential in detecting out-of-distribution examples, yet it has considerable computational costs. In this project, we propose and analyze an approximation of the pNML, which is based on influence functions. Combining both theoretical analysis and experiments, we show that when applied to neural networks, this approximation can detect out-of-distribution examples effectively. We also compare its performance to that achieved by conducting a single gradient step for each possible label. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 207,482 |
1608.03248 | Combination of LMS Adaptive Filters with Coefficients Feedback | Parallel combinations of adaptive filters have been effectively used to improve the performance of adaptive algorithms and address well-known trade-offs, such as convergence rate vs. steady-state error. Nevertheless, typical combinations suffer from a convergence stagnation issue due to the fact that the component filters run independently. Solutions to this issue usually involve conditional transfers of coefficients between filters, which although effective, are hard to generalize to combinations with more filters or when there is no clearly faster adaptive filter. In this work, a more natural solution is proposed by cyclically feeding back the combined coefficient vector to all component filters. Besides coping with convergence stagnation, this new topology improves tracking and supervisor stability, and bridges an important conceptual gap between combinations of adaptive filters and variable step size schemes. We analyze the steady-state, tracking, and transient performance of this topology for LMS component filters and supervisors with generic activation functions. Numerical examples are used to illustrate how coefficients feedback can improve the performance of parallel combinations at a small computational overhead. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 59,655 |
2004.05076 | Cheetah: Accelerating Database Queries with Switch Pruning | Modern database systems are growing increasingly distributed and struggle to reduce query completion time with a large volume of data. In this paper, we leverage programmable switches in the network to partially offload query computation to the switch. While switches provide high performance, they have resource and programming constraints that make implementing diverse queries difficult. To fit in these constraints, we introduce the concept of data \emph{pruning} -- filtering out entries that are guaranteed not to affect output. The database system then runs the same query but on the pruned data, which significantly reduces processing time. We propose pruning algorithms for a variety of queries. We implement our system, Cheetah, on a Barefoot Tofino switch and Spark. Our evaluation on multiple workloads shows $40 - 200\%$ improvement in the query completion time compared to Spark. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 172,081 |
2306.15347 | FedET: A Communication-Efficient Federated Class-Incremental Learning Framework Based on Enhanced Transformer | Federated Learning (FL) has been widely concerned for it enables decentralized learning while ensuring data privacy. However, most existing methods unrealistically assume that the classes encountered by local clients are fixed over time. After learning new classes, this assumption will make the model's catastrophic forgetting of old classes significantly severe. Moreover, due to the limitation of communication cost, it is challenging to use large-scale models in FL, which will affect the prediction accuracy. To address these challenges, we propose a novel framework, Federated Enhanced Transformer (FedET), which simultaneously achieves high accuracy and low communication cost. Specifically, FedET uses Enhancer, a tiny module, to absorb and communicate new knowledge, and applies pre-trained Transformers combined with different Enhancers to ensure high precision on various tasks. To address local forgetting caused by new classes of new tasks and global forgetting brought by non-i.i.d (non-independent and identically distributed) class imbalance across different local clients, we proposed an Enhancer distillation method to modify the imbalance between old and new knowledge and repair the non-i.i.d. problem. Experimental results demonstrate that FedET's average accuracy on representative benchmark datasets is 14.1% higher than the state-of-the-art method, while FedET saves 90% of the communication cost compared to the previous method. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 375,993 |
2209.11158 | Multi-Tenant Cloud FPGA: A Survey on Security | With the exponentially increasing demand for performance and scalability in cloud applications and systems, data center architectures evolved to integrate heterogeneous computing fabrics that leverage CPUs, GPUs, and FPGAs. FPGAs differ from traditional processing platforms such as CPUs and GPUs in that they are reconfigurable at run-time, providing increased and customized performance, flexibility, and acceleration. FPGAs can perform large-scale search optimization, acceleration, and signal processing tasks compared with power, latency, and processing speed. Many public cloud provider giants, including Amazon, Huawei, Microsoft, Alibaba, etc., have already started integrating FPGA-based cloud acceleration services. While FPGAs in cloud applications enable customized acceleration with low power consumption, it also incurs new security challenges that still need to be reviewed. Allowing cloud users to reconfigure the hardware design after deployment could open the backdoors for malicious attackers, potentially putting the cloud platform at risk. Considering security risks, public cloud providers still don't offer multi-tenant FPGA services. This paper analyzes the security concerns of multi-tenant cloud FPGAs, gives a thorough description of the security problems associated with them, and discusses upcoming future challenges in this field of study. | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | false | 319,094 |
1911.01462 | Time/Accuracy Tradeoffs for Learning a ReLU with respect to Gaussian Marginals | We consider the problem of computing the best-fitting ReLU with respect to square-loss on a training set when the examples have been drawn according to a spherical Gaussian distribution (the labels can be arbitrary). Let $\mathsf{opt} < 1$ be the population loss of the best-fitting ReLU. We prove: 1. Finding a ReLU with square-loss $\mathsf{opt} + \epsilon$ is as hard as the problem of learning sparse parities with noise, widely thought to be computationally intractable. This is the first hardness result for learning a ReLU with respect to Gaussian marginals, and our results imply -{\emph unconditionally}- that gradient descent cannot converge to the global minimum in polynomial time. 2. There exists an efficient approximation algorithm for finding the best-fitting ReLU that achieves error $O(\mathsf{opt}^{2/3})$. The algorithm uses a novel reduction to noisy halfspace learning with respect to $0/1$ loss. Prior work due to Soltanolkotabi [Sol17] showed that gradient descent can find the best-fitting ReLU with respect to Gaussian marginals, if the training set is exactly labeled by a ReLU. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 152,099 |
2104.00706 | BRepNet: A topological message passing system for solid models | Boundary representation (B-rep) models are the standard way 3D shapes are described in Computer-Aided Design (CAD) applications. They combine lightweight parametric curves and surfaces with topological information which connects the geometric entities to describe manifolds. In this paper we introduce BRepNet, a neural network architecture designed to operate directly on B-rep data structures, avoiding the need to approximate the model as meshes or point clouds. BRepNet defines convolutional kernels with respect to oriented coedges in the data structure. In the neighborhood of each coedge, a small collection of faces, edges and coedges can be identified and patterns in the feature vectors from these entities detected by specific learnable parameters. In addition, to encourage further deep learning research with B-reps, we publish the Fusion 360 Gallery segmentation dataset. A collection of over 35,000 B-rep models annotated with information about the modeling operations which created each face. We demonstrate that BRepNet can segment these models with higher accuracy than methods working on meshes, and point clouds. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 228,090 |
2405.14556 | Deep Learning Classification of Photoplethysmogram Signal for Hypertension Levels | Continuous photoplethysmography (PPG)-based blood pressure monitoring is necessary for healthcare and fitness applications. In Artificial Intelligence (AI), signal classification levels with the machine and deep learning arrangements need to be explored further. Techniques based on time-frequency spectra, such as Short-time Fourier Transform (STFT), have been used to address the challenges of motion artifact correction. Therefore, the proposed study works with PPG signals of more than 200 patients (650+ signal samples) with hypertension, using STFT with various Neural Networks (Convolution Neural Network (CNN), Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (Bi-LSTM), followed by machine learning classifiers, such as, Support Vector Machine (SVM) and Random Forest (RF). The classification has been done for two categories: Prehypertension (normal levels) and Hypertension (includes Stage I and Stage II). Various performance metrics have been obtained with two batch sizes of 3 and 16 for the fusion of the neural networks. With precision and specificity of 100% and recall of 82.1%, the LSTM model provides the best results among all combinations of Neural Networks. However, the maximum accuracy of 71.9% is achieved by the LSTM-CNN model. Further stacked Ensemble method has been used to achieve 100% accuracy for Meta-LSTM-RF, Meta- LSTM-CNN-RF and Meta- STFT-CNN-SVM. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 456,489 |
2004.10976 | OF-VO: Efficient Navigation among Pedestrians Using Commodity Sensors | We present a modified velocity-obstacle (VO) algorithm that uses probabilistic partial observations of the environment to compute velocities and navigate a robot to a target. Our system uses commodity visual sensors, including a mono-camera and a 2D Lidar, to explicitly predict the velocities and positions of surrounding obstacles through optical flow estimation, object detection, and sensor fusion. A key aspect of our work is coupling the perception (OF: optical flow) and planning (VO) components for reliable navigation. Overall, our OF-VO algorithm using learning-based perception and model-based planning methods offers better performance than prior algorithms in terms of navigation time and success rate of collision avoidance. Our method also provides bounds on the probabilistic collision avoidance algorithm. We highlight the realtime performance of OF-VO on a Turtlebot navigating among pedestrians in both simulated and real-world scenes. A demo video is available at https://gamma.umd.edu/ofvo/ | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 173,779 |
2210.17141 | Studying inductive biases in image classification task | Recently, self-attention (SA) structures became popular in computer vision fields. They have locally independent filters and can use large kernels, which contradicts the previously popular convolutional neural networks (CNNs). CNNs success was attributed to the hard-coded inductive biases of locality and spatial invariance. However, recent studies have shown that inductive biases in CNNs are too restrictive. On the other hand, the relative position encodings, similar to depthwise (DW) convolution, are necessary for the local SA networks, which indicates that the SA structures are not entirely spatially variant. Hence, we would like to determine which part of inductive biases contributes to the success of the local SA structures. To do so, we introduced context-aware decomposed attention (CADA), which decomposes attention maps into multiple trainable base kernels and accumulates them using context-aware (CA) parameters. This way, we could identify the link between the CNNs and SA networks. We conducted ablation studies using the ResNet50 applied to the ImageNet classification task. DW convolution could have a large locality without increasing computational costs compared to CNNs, but the accuracy saturates with larger kernels. CADA follows this characteristic of locality. We showed that context awareness was the crucial property; however, large local information was not necessary to construct CA parameters. Even though no spatial invariance makes training difficult, more relaxed spatial invariance gave better accuracy than strict spatial invariance. Also, additional strong spatial invariance through relative position encoding was preferable. We extended these experiments to filters for downsampling and showed that locality bias is more critical for downsampling but can remove the strong locality bias using relaxed spatial invariance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 327,583 |
2106.11359 | Photozilla: A Large-Scale Photography Dataset and Visual Embedding for 20 Photography Styles | The advent of social media platforms has been a catalyst for the development of digital photography that engendered a boom in vision applications. With this motivation, we introduce a large-scale dataset termed 'Photozilla', which includes over 990k images belonging to 10 different photographic styles. The dataset is then used to train 3 classification models to automatically classify the images into the relevant style which resulted in an accuracy of ~96%. With the rapid evolution of digital photography, we have seen new types of photography styles emerging at an exponential rate. On that account, we present a novel Siamese-based network that uses the trained classification models as the base architecture to adapt and classify unseen styles with only 25 training samples. We report an accuracy of over 68% for identifying 10 other distinct types of photography styles. This dataset can be found at https://trisha025.github.io/Photozilla/ | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 242,365 |
2010.15034 | Learning Objective Functions Incrementally by Inverse Optimal Control | This paper proposes an inverse optimal control method which enables a robot to incrementally learn a control objective function from a collection of trajectory segments. By saying incrementally, it means that the collection of trajectory segments is enlarged because additional segments are provided as time evolves. The unknown objective function is parameterized as a weighted sum of features with unknown weights. Each trajectory segment is a small snippet of optimal trajectory. The proposed method shows that each trajectory segment, if informative, can pose a linear constraint to the unknown weights, thus, the objective function can be learned by incrementally incorporating all informative segments. Effectiveness of the method is shown on a simulated 2-link robot arm and a 6-DoF maneuvering quadrotor system, in each of which only small demonstration segments are available. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 203,652 |
2210.01295 | Max-Quantile Grouped Infinite-Arm Bandits | In this paper, we consider a bandit problem in which there are a number of groups each consisting of infinitely many arms. Whenever a new arm is requested from a given group, its mean reward is drawn from an unknown reservoir distribution (different for each group), and the uncertainty in the arm's mean reward can only be reduced via subsequent pulls of the arm. The goal is to identify the infinite-arm group whose reservoir distribution has the highest $(1-\alpha)$-quantile (e.g., median if $\alpha = \frac{1}{2}$), using as few total arm pulls as possible. We introduce a two-step algorithm that first requests a fixed number of arms from each group and then runs a finite-arm grouped max-quantile bandit algorithm. We characterize both the instance-dependent and worst-case regret, and provide a matching lower bound for the latter, while discussing various strengths, weaknesses, algorithmic improvements, and potential lower bounds associated with our instance-dependent upper bounds. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 321,201 |
2107.09953 | Characterization Multimodal Connectivity of Brain Network by Hypergraph GAN for Alzheimer's Disease Analysis | Using multimodal neuroimaging data to characterize brain network is currently an advanced technique for Alzheimer's disease(AD) Analysis. Over recent years the neuroimaging community has made tremendous progress in the study of resting-state functional magnetic resonance imaging (rs-fMRI) derived from blood-oxygen-level-dependent (BOLD) signals and Diffusion Tensor Imaging (DTI) derived from white matter fiber tractography. However, Due to the heterogeneity and complexity between BOLD signals and fiber tractography, Most existing multimodal data fusion algorithms can not sufficiently take advantage of the complementary information between rs-fMRI and DTI. To overcome this problem, a novel Hypergraph Generative Adversarial Networks(HGGAN) is proposed in this paper, which utilizes Interactive Hyperedge Neurons module (IHEN) and Optimal Hypergraph Homomorphism algorithm(OHGH) to generate multimodal connectivity of Brain Network from rs-fMRI combination with DTI. To evaluate the performance of this model, We use publicly available data from the ADNI database to demonstrate that the proposed model not only can identify discriminative brain regions of AD but also can effectively improve classification performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 247,178 |
2201.08472 | Weighted Sum-Rate Maximization for Rate-Splitting Multiple Access Based Secure Communication | As investigations on physical layer security evolve from point-to-point systems to multi-user scenarios, multi-user interference (MUI) is introduced and becomes an unavoidable issue. Different from treating MUI totally as noise in conventional secure communications, in this paper, we propose a rate-splitting multiple access (RSMA)-based secure beamforming design, where user messages are split and encoded into common and private streams. Each user not only decodes the common stream and the intended private stream, but also tries to eavesdrop the private streams of other users. We formulate a weighted sum-rate (WSR) maximization problem subject to the secrecy rate requirements of all users. To tackle the non-convexity of the formulated problem, a successive convex approximation (SCA)-based approach is adopted to convert the original non-convex and intractable problem into a low-complexity suboptimal iterative algorithm. Numerical results demonstrate that the proposed secure beamforming scheme outperforms the conventional multi-user linear precoding (MULP) technique in terms of the WSR performance while ensuring user secrecy rate requirements. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 276,349 |
2305.14781 | Accelerated Nonconvex ADMM with Self-Adaptive Penalty for Rank-Constrained Model Identification | The alternating direction method of multipliers (ADMM) has been widely adopted in low-rank approximation and low-order model identification tasks; however, the performance of nonconvex ADMM is highly reliant on the choice of penalty parameter. To accelerate ADMM for solving rank-constrained identification problems, this paper proposes a new self-adaptive strategy for automatic penalty update. Guided by first-order analysis of the increment of the augmented Lagrangian, the self-adaptive penalty updating enables effective and balanced minimization of both primal and dual residuals and thus ensures a stable convergence. Moreover, improved efficiency can be obtained within the Anderson acceleration scheme. Numerical examples show that the proposed strategy significantly accelerates the convergence of nonconvex ADMM while alleviating the critical reliance on tedious tuning of penalty parameters. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 367,252 |
2112.13323 | Airphant: Cloud-oriented Document Indexing | Modern data warehouses can scale compute nodes independently of storage. These systems persist their data on cloud storage, which is always available and cost-efficient. Ad-hoc compute nodes then fetch necessary data on-demand from cloud storage. This ability to quickly scale or shrink data systems is highly beneficial if query workloads may change over time. We apply this new architecture to search engines with a focus on optimizing their latencies in cloud environments. However, simply placing existing search engines (e.g., Apache Lucene) on top of cloud storage significantly increases their end-to-end query latencies (i.e., more than 6 seconds on average in one of our studies). This is because their indexes can incur multiple network round-trips due to their hierarchical structure (e.g., skip lists, B-trees, learned indexes). To address this issue, we develop a new statistical index (called IoU Sketch). For lookup, IoU Sketch makes multiple asynchronous network requests in parallel. While IoU Sketch may fetch more bytes than existing indexes, it significantly reduces the index lookup time because parallel requests do not block each other. Based on IoU Sketch, we build an end-to-end search engine, called Airphant; we describe how Airphant builds, optimizes, and manages IoU Sketch; and ultimately, supports keyword-based querying. In our experiments with four real datasets, Airphant's average end-to-end latencies are between 13 milliseconds and 300 milliseconds, being up to 8.97x faster than Apache Lucene and 113.39x faster than Elasticsearch. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | 273,210
1304.5863 | Commonsense Reasoning and Large Network Analysis: A Computational Study
of ConceptNet 4 | In this report a computational study of ConceptNet 4 is performed using tools from the field of network analysis. Part I describes the process of extracting the data from the SQL database that is available online, as well as how the closure of the input among the assertions in the English language is computed. This part also performs a validation of the input as well as checks for the consistency of the entire database. Part II investigates the structural properties of ConceptNet 4. Different graphs are induced from the knowledge base by fixing different parameters. The degrees and the degree distributions are examined, the number and sizes of connected components, the transitivity and clustering coefficient, the cores, information related to shortest paths in the graphs, and cliques. Part III investigates non-overlapping, as well as overlapping communities that are found in ConceptNet 4. Finally, Part IV describes an investigation on rules. | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 24,126 |
1910.03786 | Global Convergence for Replicator Dynamics of Repeated Snowdrift Games | To understand the emergence and sustainment of cooperative behavior in interacting collectives, we perform global convergence analysis for replicator dynamics of a large, well-mixed population of individuals playing a repeated snowdrift game with four typical strategies, which are always cooperate (ALLC), tit-for-tat (TFT), suspicious tit-for-tat (STFT) and always defect (ALLD). The dynamical model is a three-dimensional ODE system that is parameterized by the payoffs of the base game. Instead of routine searches for evolutionarily stable strategies and sets, we expand our analysis to determining the asymptotic behavior of solution trajectories starting from any initial state, and in particular show that for the full range of payoffs, every trajectory of the system converges to an equilibrium point. The convergence results highlight three findings that are of particular importance for understanding the cooperation mechanisms among self-interested agents playing repeated snowdrift games. First, the inclusion of TFT- and STFT-players, the two types of conditional strategy players in the game, increases the share of cooperators of the overall population compared to the situation when the population consists of only ALLC- and ALLD-players. This confirms findings in biology and sociology that reciprocity may promote cooperation in social collective actions, such as reducing traffic jams and division of labor, where each individual may gain more to play the opposite of what her opponent chooses. Second, surprisingly enough, regardless of the payoffs, there always exists a set of initial conditions under which ALLC players do not vanish in the long run, which does not hold for all the other three types of players. So an ALLC-player, although perceived as the one that can be easily taken advantage of in snowdrift games, has certain endurance in the long run. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 148,585
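The replicator dynamics analyzed in the abstract above can be sketched numerically. The payoff matrix `A` below is purely illustrative (the paper parameterizes it by the snowdrift base-game payoffs); the simulation only demonstrates the simplex-preserving form of the ODE:

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of replicator dynamics dx_i/dt = x_i((Ax)_i - x.Ax)."""
    f = A @ x            # fitness of each strategy against the population
    phi = x @ f          # population-average fitness
    x = x + dt * x * (f - phi)
    return x / x.sum()   # renormalize to stay on the probability simplex

# Illustrative 4x4 payoff matrix for (ALLC, TFT, STFT, ALLD); the actual
# repeated-snowdrift payoffs are parameterized in the paper.
A = np.array([[3.0, 3.0, 2.0, 1.0],
              [3.0, 3.0, 2.5, 1.5],
              [2.5, 2.5, 2.0, 1.5],
              [4.0, 1.5, 1.5, 0.0]])

x = np.array([0.25, 0.25, 0.25, 0.25])  # uniform initial strategy shares
for _ in range(5000):
    x = replicator_step(x, A)
```

Because the update rescales `x` after each step, the state remains a valid distribution over the four strategies throughout the integration.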
2210.05457 | Are Pretrained Multilingual Models Equally Fair Across Languages? | Pretrained multilingual language models can help bridge the digital language divide, enabling high-quality NLP models for lower resourced languages. Studies of multilingual models have so far focused on performance, consistency, and cross-lingual generalisation. However, with their wide-spread application in the wild and downstream societal impact, it is important to put multilingual models under the same scrutiny as monolingual models. This work investigates the group fairness of multilingual models, asking whether these models are equally fair across languages. To this end, we create a new four-way multilingual dataset of parallel cloze test examples (MozArt), equipped with demographic information (balanced with regard to gender and native tongue) about the test participants. We evaluate three multilingual models on MozArt -- mBERT, XLM-R, and mT5 -- and show that across the four target languages, the three models exhibit different levels of group disparity, e.g., exhibiting near-equal risk for Spanish, but high levels of disparity for German. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 322,860 |
2301.06372 | Disambiguation of One-Shot Visual Classification Tasks: A Simplex-Based
Approach | The field of visual few-shot classification aims at transferring the state-of-the-art performance of deep learning visual systems onto tasks where only a very limited number of training samples are available. The main solution consists in training a feature extractor using a large and diverse dataset to be applied to the considered few-shot task. Thanks to the encoded priors in the feature extractors, classification tasks with as little as one example (or "shot") for each class can be solved with high accuracy, even when the shots display individual features not representative of their classes. Yet, the problem becomes more complicated when some of the given shots display multiple objects. In this paper, we present a strategy which aims at detecting the presence of multiple and previously unseen objects in a given shot. This methodology is based on identifying the corners of a simplex in a high dimensional space. We introduce an optimization routine and showcase its ability to successfully detect multiple (previously unseen) objects in raw images. Then, we introduce a downstream classifier meant to exploit the presence of multiple objects to improve the performance of few-shot classification, in the case of extreme settings where only one shot is given for its class. Using standard benchmarks of the field, we show the ability of the proposed method to slightly, yet statistically significantly, improve accuracy in these settings. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 340,629
cs/0501017 | Public Key Cryptography based on Semigroup Actions | A generalization of the original Diffie-Hellman key exchange in $(\Z/p\Z)^*$ found a new depth when Miller and Koblitz suggested that such a protocol could be used with the group over an elliptic curve. In this paper, we propose a further vast generalization where abelian semigroups act on finite sets. We define a Diffie-Hellman key exchange in this setting and we illustrate how to build interesting semigroup actions using finite (simple) semirings. The practicality of the proposed extensions relies on the orbit sizes of the semigroup actions; at this point it is an open question how to compute these orbit sizes in general, and also whether a square-root attack exists in general. In Section 2 a concrete practical semigroup action built from simple semirings is presented. It will require further research to analyse this system. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 538,483
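The semigroup-action Diffie-Hellman described above reduces to the classical exchange when the acting semigroup is the integers under multiplication acting on $(\Z/p\Z)^*$ by exponentiation. A minimal sketch of that special case (toy parameters, not a secure instantiation):

```python
def act(a, s, p):
    """Action of the multiplicative semigroup of integers on (Z/pZ)*:
    a . s = s^a mod p.  The action commutes: a.(b.s) = (a*b).s = b.(a.s),
    which is exactly what the key exchange relies on."""
    return pow(s, a, p)

p, s = 2**127 - 1, 5            # public: a prime modulus and a base point
alice_secret, bob_secret = 123456789, 987654321

A = act(alice_secret, s, p)     # Alice publishes a.s
B = act(bob_secret, s, p)       # Bob publishes b.s

key_alice = act(alice_secret, B, p)   # a.(b.s)
key_bob = act(bob_secret, A, p)       # b.(a.s)
assert key_alice == key_bob           # shared key agrees by commutativity
```

The paper's generalization swaps this particular action for one built from finite simple semirings, but the protocol skeleton is the same.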
2501.10953 | Channel Coding for Gaussian Channels with Mean and Variance Constraints | We consider channel coding for Gaussian channels with the recently introduced mean and variance cost constraints. Through matching converse and achievability bounds, we characterize the optimal first- and second-order performance. The main technical contribution of this paper is an achievability scheme which uses random codewords drawn from a mixture of three uniform distributions on $(n-1)$-spheres of radii $R_1, R_2$ and $R_3$, where $R_i = O(\sqrt{n})$ and $|R_i - R_j| = O(1)$. To analyze such a mixture distribution, we prove a lemma giving a uniform $O(\log n)$ bound, which holds with high probability, on the log ratio of the output distributions $Q_i^{cc}$ and $Q_j^{cc}$, where $Q_i^{cc}$ is induced by a random channel input uniformly distributed on an $(n-1)$-sphere of radius $R_i$. To facilitate the application of the usual central limit theorem, we also give a uniform $O(\log n)$ bound, which holds with high probability, on the log ratio of the output distributions $Q_i^{cc}$ and $Q^*_i$, where $Q_i^*$ is induced by a random channel input with i.i.d. components. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 525,731 |
2306.09187 | MolCAP: Molecular Chemical reActivity pretraining and
prompted-finetuning enhanced molecular representation learning | Molecular representation learning (MRL) is a fundamental task for drug discovery. However, previous deep-learning (DL) methods focus excessively on learning robust inner-molecular representations via a mask-dominated pretraining framework, neglecting abundant chemical reactivity molecular relationships that have been demonstrated as the determining factor for various molecular property prediction tasks. Here, we present MolCAP to promote MRL, a graph pretraining Transformer based on chemical reactivity (IMR) knowledge with prompted finetuning. Results show that MolCAP outperforms comparative methods based on the traditional molecular pretraining framework on 13 publicly available molecular datasets across a diversity of biomedical tasks. Prompted by MolCAP, even basic graph neural networks are capable of achieving surprising performance that outperforms previous models, indicating the promising prospect of applying reactivity information to MRL. In addition, manually designed molecular templates have the potential to uncover dataset bias. All in all, we expect MolCAP to yield more chemically meaningful insights for the entire process of drug discovery. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 373,707
1602.00482 | Memory-Based Data-Driven MRAC Architecture Ensuring Parameter
Convergence | Convergence of controller parameters in standard model reference adaptive control (MRAC) requires the system states to be persistently exciting (PE), a restrictive condition to be verified online. A recent data-driven approach, concurrent learning, uses information-rich past data concurrently with the standard parameter update laws to guarantee parameter convergence without the need of the PE condition. This method guarantees exponential convergence of both the tracking and the controller parameter estimation errors to zero, whereas, the classical MRAC merely ensures asymptotic convergence of tracking error to zero. However, the method requires knowledge of the state derivative, at least at the time instances when the state values are stored in memory. The method further assumes knowledge of the control allocation matrix. This paper addresses these limitations by using a memory-based finite-time system identifier in conjunction with a data-driven approach, leading to convergence of both the tracking and the controller parameter estimation errors without the PE condition and knowledge of the system matrices and the state derivative. A Lyapunov based stability proof is included to justify the validity of the proposed data-driven approach. Simulation results demonstrate the efficacy of the suggested method. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 51,586 |
2307.00035 | Parameter Identification for Partial Differential Equations with
Spatiotemporal Varying Coefficients | To comprehend complex systems with multiple states, it is imperative to reveal the identity of these states by system outputs. Nevertheless, the mathematical models describing these systems often exhibit nonlinearity, rendering the resolution of the parameter inverse problem from observed spatiotemporal data a challenging endeavor. Starting from the observed data obtained from such systems, we propose a novel framework that facilitates the investigation of parameter identification for multi-state systems governed by spatiotemporal varying parametric partial differential equations. Our framework consists of two integral components: a constrained self-adaptive physics-informed neural network, encompassing a sub-network, as our methodology for parameter identification, and a finite mixture model approach to detect regions of probable parameter variations. Through our scheme, we can precisely ascertain the unknown varying parameters of the complex multi-state system, thereby accomplishing the inversion of the varying parameters. Furthermore, we have showcased the efficacy of our framework on two numerical cases: the 1D Burgers' equation with time-varying parameters and the 2D wave equation with a space-varying parameter. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 376,845
2406.06999 | Teaching with Uncertainty: Unleashing the Potential of Knowledge
Distillation in Object Detection | Knowledge distillation (KD) is a widely adopted and effective method for compressing models in object detection tasks. Particularly, feature-based distillation methods have shown remarkable performance. Existing approaches often ignore the uncertainty in the teacher model's knowledge, which stems from data noise and imperfect training. This limits the student model's ability to learn latent knowledge, as it may overly rely on the teacher's imperfect guidance. In this paper, we propose a novel feature-based distillation paradigm with knowledge uncertainty for object detection, termed "Uncertainty Estimation-Discriminative Knowledge Extraction-Knowledge Transfer (UET)", which can seamlessly integrate with existing distillation methods. By leveraging the Monte Carlo dropout technique, we introduce knowledge uncertainty into the training process of the student model, facilitating deeper exploration of latent knowledge. Our method performs effectively during the KD process without requiring intricate structures or extensive computational resources. Extensive experiments validate the effectiveness of our proposed approach across various distillation strategies, detectors, and backbone architectures. Specifically, following our proposed paradigm, the existing FGD method achieves state-of-the-art (SoTA) performance, with ResNet50-based GFL achieving 44.1% mAP on the COCO dataset, surpassing the baselines by 3.9%. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 462,853 |
2104.06773 | HoughNet: Integrating near and long-range evidence for visual detection | This paper presents HoughNet, a one-stage, anchor-free, voting-based, bottom-up object detection method. Inspired by the Generalized Hough Transform, HoughNet determines the presence of an object at a certain location by the sum of the votes cast on that location. Votes are collected from both near and long-distance locations based on a log-polar vote field. Thanks to this voting mechanism, HoughNet is able to integrate both near and long-range, class-conditional evidence for visual recognition, thereby generalizing and enhancing current object detection methodology, which typically relies on only local evidence. On the COCO dataset, HoughNet's best model achieves $46.4$ $AP$ (and $65.1$ $AP_{50}$), performing on par with the state-of-the-art in bottom-up object detection and outperforming most major one-stage and two-stage methods. We further validate the effectiveness of our proposal in other visual detection tasks, namely, video object detection, instance segmentation, 3D object detection and keypoint detection for human pose estimation, and an additional "labels to photo" image generation task, where the integration of our voting module consistently improves performance in all cases. Code is available at https://github.com/nerminsamet/houghnet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 230,191 |
1501.07788 | Human diffusion and city influence | Cities are characterized by concentrating population, economic activity and services. However, not all cities are equal and a natural hierarchy at local, regional or global scales spontaneously emerges. In this work, we introduce a method to quantify city influence using geolocated tweets to characterize human mobility. Rome and Paris appear consistently as the cities attracting most diverse visitors. The ratio between locals and non-local visitors turns out to be fundamental for a city to truly be global. Focusing only on urban residents' mobility flows, a city to city network can be constructed. This network allows us to analyze centrality measures at different scales. New York and London play a predominant role at the global scale, while urban rankings suffer substantial changes if the focus is set at a regional level. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 39,742 |
2303.00132 | Onboard dynamic-object detection and tracking for autonomous robot
navigation with RGB-D camera | Deploying autonomous robots in crowded indoor environments usually requires them to have accurate dynamic obstacle perception. Although plenty of previous works in the autonomous driving field have investigated the 3D object detection problem, the usage of dense point clouds from a heavy Light Detection and Ranging (LiDAR) sensor and their high computation cost for learning-based data processing make those methods not applicable to small robots, such as vision-based UAVs with small onboard computers. To address this issue, we propose a lightweight 3D dynamic obstacle detection and tracking (DODT) method based on an RGB-D camera, which is designed for low-power robots with limited computing power. Our method adopts a novel ensemble detection strategy, combining multiple computationally efficient but low-accuracy detectors to achieve real-time high-accuracy obstacle detection. Besides, we introduce a new feature-based data association and tracking method to prevent mismatches utilizing point clouds' statistical features. In addition, our system includes an optional and auxiliary learning-based module to enhance the obstacle detection range and dynamic obstacle identification. The proposed method is implemented in a small quadcopter, and the results show that our method can achieve the lowest position error (0.11m) and a comparable velocity error (0.23m/s) across the benchmarking algorithms running on the robot's onboard computer. The flight experiments prove that the tracking results from the proposed method can make the robot efficiently alter its trajectory for navigating dynamic environments. Our software is available on GitHub as an open-source ROS package. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 348,490 |
1906.07840 | A Static Analysis-based Cross-Architecture Performance Prediction Using
Machine Learning | Porting code from CPU to GPU is costly and time-consuming; unless much time is invested in development and optimization, it is not obvious, a priori, how much speed-up is achievable or how much room is left for improvement. Knowing the potential speed-up a priori can be very useful: it can save hundreds of engineering hours and help programmers with prioritization and algorithm selection. We aim to address this problem using machine learning in a supervised setting, using solely the single-threaded source code of the program, without having to run or profile the code. We propose a static analysis-based cross-architecture performance prediction framework (Static XAPP) which relies solely on program properties collected using static analysis of the CPU source code and predicts whether the potential speed-up is above or below a given threshold. We offer preliminary results that show we can achieve 94% accuracy in binary classification, on average, across different thresholds. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 135,695
2305.14773 | Robust Imaging Sonar-based Place Recognition and Localization in
Underwater Environments | Place recognition using SOund Navigation and Ranging (SONAR) images is an important task for simultaneous localization and mapping (SLAM) in underwater environments. This paper proposes a robust and efficient imaging SONAR based place recognition, SONAR context, and loop closure method. Unlike previous methods, our approach encodes geometric information based on the characteristics of raw SONAR measurements without prior knowledge or training. We also design a hierarchical searching procedure for fast retrieval of candidate SONAR frames and apply adaptive shifting and padding to achieve robust matching on rotation and translation changes. In addition, we can derive the initial pose through adaptive shifting and apply it to the iterative closest point (ICP) based loop closure factor. We evaluate the performance of SONAR context in various underwater sequences such as simulated open water, a real water tank, and real underwater environments. The proposed approach shows the robustness and improvements of place recognition on various datasets and evaluation metrics. Supplementary materials are available at https://github.com/sparolab/sonar_context.git. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 367,248
2408.11196 | Robust Long-Range Perception Against Sensor Misalignment in Autonomous
Vehicles | Advances in machine learning algorithms for sensor fusion have significantly improved the detection and prediction of other road users, thereby enhancing safety. However, even a small angular displacement in the sensor's placement can cause significant degradation in output, especially at long range. In this paper, we demonstrate a simple yet generic and efficient multi-task learning approach that not only detects misalignment between different sensor modalities but is also robust against them for long-range perception. Along with the amount of misalignment, our method also predicts calibrated uncertainty, which can be useful for filtering and fusing predicted misalignment values over time. In addition, we show that the predicted misalignment parameters can be used for self-correcting input sensor data, further improving the perception performance under sensor misalignment. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 482,173 |
2208.03949 | Extrinsic Camera Calibration with Semantic Segmentation | Monocular camera sensors are vital to intelligent vehicle operation and automated driving assistance and are also heavily employed in traffic control infrastructure. Calibrating the monocular camera, though, is time-consuming and often requires significant manual intervention. In this work, we present an extrinsic camera calibration approach that automatizes the parameter estimation by utilizing semantic segmentation information from images and point clouds. Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle with high-precision localization to capture a point cloud of the camera environment. Afterward, a mapping between the camera and world coordinate spaces is obtained by performing a lidar-to-camera registration of the semantically segmented sensor data. We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results. Our approach is suitable for infrastructure sensors as well as vehicle sensors, while it does not require motion of the camera platform. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 311,950 |
2309.07798 | Enhancing Performance, Calibration Time and Efficiency in Brain-Machine
Interfaces through Transfer Learning and Wearable EEG Technology | Brain-machine interfaces (BMIs) have emerged as a transformative force in assistive technologies, empowering individuals with motor impairments by enabling device control and facilitating functional recovery. However, the persistent challenge of inter-session variability poses a significant hurdle, requiring time-consuming calibration at every new use. Compounding this issue, the low comfort level of current devices further restricts their usage. To address these challenges, we propose a comprehensive solution that combines a tiny CNN-based Transfer Learning (TL) approach with a comfortable, wearable EEG headband. The novel wearable EEG device features soft dry electrodes placed on the headband and is capable of on-board processing. We acquire multiple sessions of motor-movement EEG data and achieve up to 96% inter-session accuracy using TL, greatly reducing the calibration time and improving usability. By executing the inference on the edge every 100ms, the system is estimated to achieve 30h of battery life. The comfortable BMI setup with tiny CNN and TL paves the way to future on-device continual learning, essential for tackling inter-session variability and improving usability. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 391,910 |
2411.18101 | Aligning Knowledge Concepts to Whole Slide Images for Precise
Histopathology Image Analysis | Due to the large size and lack of fine-grained annotation, Whole Slide Images (WSIs) analysis is commonly approached as a Multiple Instance Learning (MIL) problem. However, previous studies only learn from training data, posing a stark contrast to how human clinicians teach each other and reason about histopathologic entities and factors. Here we present a novel knowledge concept-based MIL framework, named ConcepPath, to fill this gap. Specifically, ConcepPath utilizes GPT-4 to induce reliable disease-specific human expert concepts from medical literature, and incorporates them with a group of purely learnable concepts to extract complementary knowledge from training data. In ConcepPath, WSIs are aligned to these linguistic knowledge concepts by utilizing a pathology vision-language model as the basic building component. On lung cancer subtyping, breast cancer HER2 scoring, and gastric cancer immunotherapy-sensitive subtyping tasks, ConcepPath significantly outperformed previous SOTA methods, which lack the guidance of human expert knowledge. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 511,731
2402.00763 | 360-GS: Layout-guided Panoramic Gaussian Splatting For Indoor Roaming | 3D Gaussian Splatting (3D-GS) has recently attracted great attention with real-time and photo-realistic renderings. This technique typically takes perspective images as input and optimizes a set of 3D elliptical Gaussians by splatting them onto the image planes, resulting in 2D Gaussians. However, applying 3D-GS to panoramic inputs presents challenges in effectively modeling the projection onto the spherical surface of ${360^\circ}$ images using 2D Gaussians. In practical applications, input panoramas are often sparse, leading to unreliable initialization of 3D Gaussians and subsequent degradation of 3D-GS quality. In addition, due to the under-constrained geometry of texture-less planes (e.g., walls and floors), 3D-GS struggles to model these flat regions with elliptical Gaussians, resulting in significant floaters in novel views. To address these issues, we propose 360-GS, a novel $360^{\circ}$ Gaussian splatting for a limited set of panoramic inputs. Instead of splatting 3D Gaussians directly onto the spherical surface, 360-GS projects them onto the tangent plane of the unit sphere and then maps them to the spherical projections. This adaptation enables the representation of the projection using Gaussians. We guide the optimization of 360-GS by exploiting layout priors within panoramas, which are simple to obtain and contain strong structural information about the indoor scene. Our experimental results demonstrate that 360-GS allows panoramic rendering and outperforms state-of-the-art methods with fewer artifacts in novel view synthesis, thus providing immersive roaming in indoor scenarios. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 425,713 |
2407.13178 | The use of the symmetric finite difference in the local binary pattern
(symmetric LBP) | The paper provides a mathematical view of the binary numbers produced in the Local Binary Pattern (LBP) feature extraction process. The symmetric finite difference is often applied in numerical analysis to enhance the accuracy of approximations. The paper then investigates the use of the symmetric finite difference in the LBP formulation for face detection and facial expression recognition. It introduces a novel approach that extends the standard LBP, which typically employs eight directional derivatives, to incorporate only four directional derivatives. This approach is named symmetric LBP, and it reduces the number of LBP features from 256 to 16. The study underscores the significance of the number of directions considered in the new approach, and the results obtained emphasize the importance of the research topic. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 474,270
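One plausible reading of the construction above (the exact definition is the paper's): standard LBP thresholds eight neighbors against the center pixel to get a 256-valued code, while a symmetric variant keeps only the signs of four symmetric central differences, giving 16 values. A sketch under that assumption:

```python
def lbp8(patch):
    """Standard 8-neighbor LBP code (0..255) for a 3x3 patch's center pixel."""
    c = patch[1][1]
    # clockwise neighbor order starting at the top-left corner
    nbrs = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum((v >= c) << i for i, v in enumerate(nbrs))

def symmetric_lbp(patch):
    """4-bit code (0..15) from the signs of four symmetric (central)
    differences: horizontal, vertical, and the two diagonals."""
    pairs = [(patch[1][2], patch[1][0]),   # horizontal
             (patch[0][1], patch[2][1]),   # vertical
             (patch[0][2], patch[2][0]),   # diagonal /
             (patch[0][0], patch[2][2])]   # diagonal \
    return sum((a >= b) << i for i, (a, b) in enumerate(pairs))

patch = [[9, 2, 7],
         [4, 5, 6],
         [1, 8, 3]]
# lbp8(patch) -> 45; symmetric_lbp(patch) -> 13
```

Note how the symmetric code never consults the center pixel: each bit compares two opposite neighbors, halving the number of directions and shrinking the feature alphabet from 256 to 16.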
2106.08043 | CausalNLP: A Practical Toolkit for Causal Inference with Text | Causal inference is the process of estimating the effect or impact of a treatment on an outcome with other covariates as potential confounders (and mediators) that may need to be controlled. The vast majority of existing methods and systems for causal inference assume that all variables under consideration are categorical or numerical (e.g., gender, price, enrollment). In this paper, we present CausalNLP, a toolkit for inferring causality with observational data that includes text in addition to traditional numerical and categorical variables. CausalNLP employs the use of meta learners for treatment effect estimation and supports using raw text and its linguistic properties as a treatment, an outcome, or a "controlled-for" variable (e.g., confounder). The library is open source and available at: https://github.com/amaiya/causalnlp. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 241,164 |
2309.13525 | Semi-Supervised Domain Generalization for Object Detection via Language-Guided Feature Alignment | Existing domain adaptation (DA) and generalization (DG) methods in object detection enforce feature alignment in the visual space but face challenges like object appearance variability and scene complexity, which make it difficult to distinguish between objects and achieve accurate detection. In this paper, we are the first to address the problem of semi-supervised domain generalization by exploring vision-language pre-training and enforcing feature alignment through the language space. We employ a novel Cross-Domain Descriptive Multi-Scale Learning (CDDMSL) aiming to maximize the agreement between descriptions of an image presented with different domain-specific characteristics in the embedding space. CDDMSL significantly outperforms existing methods, achieving 11.7% and 7.5% improvement in DG and DA settings, respectively. Comprehensive analysis and ablation studies confirm the effectiveness of our method, positioning CDDMSL as a promising approach for domain generalization in object detection tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 394,245 |
1909.02041 | Regression-clustering for Improved Accuracy and Training Cost with Molecular-Orbital-Based Machine Learning | Machine learning (ML) in the representation of molecular-orbital-based (MOB) features has been shown to be an accurate and transferable approach to the prediction of post-Hartree-Fock correlation energies. Previous applications of MOB-ML employed Gaussian Process Regression (GPR), which provides good prediction accuracy with small training sets; however, the cost of GPR training scales cubically with the amount of data and becomes a computational bottleneck for large training sets. In the current work, we address this problem by introducing a clustering/regression/classification implementation of MOB-ML. In a first step, regression clustering (RC) is used to partition the training data to best fit an ensemble of linear regression (LR) models; in a second step, each cluster is regressed independently, using either LR or GPR; and in a third step, a random forest classifier (RFC) is trained for the prediction of cluster assignments based on MOB feature values. Upon inspection, RC is found to recapitulate chemically intuitive groupings of the frontier molecular orbitals, and the combined RC/LR/RFC and RC/GPR/RFC implementations of MOB-ML are found to provide good prediction accuracy with greatly reduced wall-clock training times. For a dataset of thermalized geometries of 7211 organic molecules of up to seven heavy atoms, both implementations reach chemical accuracy (1 kcal/mol error) with only 300 training molecules, while providing 35000-fold and 4500-fold reductions in the wall-clock training time, respectively, compared to MOB-ML without clustering. The resulting models are also demonstrated to retain transferability for the prediction of large-molecule energies with only small-molecule training data. Finally, it is shown that capping the number of training datapoints per cluster leads to further improvements in prediction accuracy with negligible increases in wall-clock training time. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 144,071 |
2408.16858 | Invariants of the quantum graph of the partial trace | We compute the independence number, zero-error capacity, and the values of the Lov\'asz function and the quantum Lov\'asz function for the quantum graph associated to the partial trace quantum channel $\operatorname{Tr}_n\otimes\mathrm{id}_k\colon\operatorname{B}(\mathbb{C}^n\otimes\mathbb{C}^k)\to\operatorname{B}(\mathbb{C}^k)$. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 484,473 |
1910.14574 | An Abstraction-Based Framework for Neural Network Verification | Deep neural networks are increasingly being used as controllers for safety-critical systems. Because neural networks are opaque, certifying their correctness is a significant challenge. To address this issue, several neural network verification approaches have recently been proposed. However, these approaches afford limited scalability, and applying them to large networks can be challenging. In this paper, we propose a framework that can enhance neural network verification techniques by using over-approximation to reduce the size of the network - thus making it more amenable to verification. We perform the approximation such that if the property holds for the smaller (abstract) network, it holds for the original as well. The over-approximation may be too coarse, in which case the underlying verification tool might return a spurious counterexample. Under such conditions, we perform counterexample-guided refinement to adjust the approximation, and then repeat the process. Our approach is orthogonal to, and can be integrated with, many existing verification techniques. For evaluation purposes, we integrate it with the recently proposed Marabou framework, and observe a significant improvement in Marabou's performance. Our experiments demonstrate the great potential of our approach for verifying larger neural networks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 151,685 |
1705.00840 | Pointed subspace approach to incomplete data | Incomplete data are often represented as vectors with filled missing attributes joined with flag vectors indicating missing components. In this paper we generalize this approach and represent incomplete data as pointed affine subspaces. This allows performing various affine transformations of the data, such as whitening or dimensionality reduction. We embed such generalized missing data into a vector space by mapping a pointed affine subspace (generalized missing data point) to a vector containing imputed values joined with a corresponding projection matrix. Such an operation preserves the scalar product of the embedding defined for flag vectors and allows transformed incomplete data to be fed into typical classification methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 72,761 |
1405.3486 | ESmodels: An Epistemic Specification Solver | (To appear in Theory and Practice of Logic Programming (TPLP)) ESmodels is designed and implemented as an experiment platform to investigate the semantics, language, related reasoning algorithms, and possible applications of epistemic specifications. We first give the epistemic specification language of ESmodels and its semantics. The language employs only one modal operator K, but we prove that it is able to represent rich modal operators by presenting transformation rules. Then, we describe basic algorithms and optimization approaches used in ESmodels. After that, we discuss possible applications of ESmodels in conformant planning and constraint satisfaction. Finally, we conclude with perspectives. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 33,091 |
1812.01688 | How Energy-Efficient Can a Wireless Communication System Become? | The data traffic in wireless networks is steadily growing. The long-term trend follows Cooper's law, where the traffic is doubled every two-and-a-half years, and it will likely continue for decades to come. The data transmission is tightly connected with the energy consumption in the power amplifiers, transceiver hardware, and baseband processing. The relation is captured by the energy efficiency metric, measured in bit/Joule, which describes how much energy is consumed per correctly received information bit. While the data rate is fundamentally limited by the channel capacity, there is currently no clear understanding of how energy-efficient a communication system can become. Current research papers typically present values on the order of 10 Mbit/Joule, while previous network generations seem to operate at energy efficiencies on the order of 10 kbit/Joule. Is this roughly as energy-efficient as future systems (5G and beyond) can become, or are we still far from the physical limits? These questions are answered in this paper. We analyze different cases representing potential future deployments and hardware characteristics. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 115,577 |
1812.07617 | Towards Deep Conversational Recommendations | There has been growing interest in using neural networks and deep learning techniques to create dialogue systems. Conversational recommendation is an interesting setting for the scientific exploration of dialogue with natural language as the associated discourse involves goal-driven dialogue that often transforms naturally into more free-form chat. This paper provides two contributions. First, until now there has been no publicly available large-scale dataset consisting of real-world dialogues centered around recommendations. To address this issue and to facilitate our exploration here, we have collected ReDial, a dataset consisting of over 10,000 conversations centered around the theme of providing movie recommendations. We make this data available to the community for further research. Second, we use this dataset to explore multiple facets of conversational recommendations. In particular we explore new neural architectures, mechanisms, and methods suitable for composing conversational recommendation systems. Our dataset allows us to systematically probe model sub-components addressing different parts of the overall problem domain ranging from: sentiment analysis and cold-start recommendation generation to detailed aspects of how natural language is used in this setting in the real world. We combine such sub-components into a full-blown dialogue system and examine its behavior. | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | 116,845 |
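Each row above pairs an arXiv id, title, and abstract with boolean topic labels (cs.HC, cs.CE, ..., Other) and an index column. A minimal sketch of tallying label frequencies over such rows, assuming the rows have been parsed into dicts keyed by the column names; the two sample rows below (with truncated titles) are illustrative stand-ins, not part of the dataset or any loading API:

```python
from collections import Counter

# Two hypothetical rows mimicking the table's schema (only two label
# columns shown for brevity; the real table has 18 boolean labels).
rows = [
    {"id": "2402.00763", "title": "360-GS: Layout-guided Panoramic...",
     "cs.CV": True, "cs.LG": False},
    {"id": "2106.08043", "title": "CausalNLP: A Practical Toolkit...",
     "cs.CV": False, "cs.LG": True},
]

label_cols = ["cs.CV", "cs.LG"]

# Count how many rows carry each topic label; labels are not mutually
# exclusive, so one row can contribute to several counters.
counts = Counter()
for row in rows:
    for col in label_cols:
        if row[col]:
            counts[col] += 1

print(dict(counts))  # → {'cs.CV': 1, 'cs.LG': 1}
```

Because the labels are independent booleans rather than one categorical column, this is a multi-label setup: per-label counts like these are the usual first check for class imbalance before training a classifier.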