id stringlengths 9 16 | title stringlengths 4 278 | abstract stringlengths 3 4.08k | cs.HC bool 2 classes | cs.CE bool 2 classes | cs.SD bool 2 classes | cs.SI bool 2 classes | cs.AI bool 2 classes | cs.IR bool 2 classes | cs.LG bool 2 classes | cs.RO bool 2 classes | cs.CL bool 2 classes | cs.IT bool 2 classes | cs.SY bool 2 classes | cs.CV bool 2 classes | cs.CR bool 2 classes | cs.CY bool 2 classes | cs.MA bool 2 classes | cs.NE bool 2 classes | cs.DB bool 2 classes | Other bool 2 classes | __index_level_0__ int64 0 541k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2108.12159 | Anomaly Detection of Defect using Energy of Point Pattern Features within Random Finite Set Framework | In this paper, we propose an efficient approach for industrial defect detection that is modeled based on anomaly detection using point pattern data. Most recent works use \textit{global features} for feature extraction to summarize image content. However, global features are not robust against lighting and viewpoint changes and do not describe the image's geometrical information to be fully utilized in the manufacturing industry. To the best of our knowledge, we are the first to propose using transfer learning of local/point pattern features to overcome these limitations and capture the geometrical information of image regions. We model these local/point pattern features as a random finite set (RFS). In addition, we propose RFS energy, in contrast to RFS likelihood, as the anomaly score. The similarity distribution of point pattern features of the normal sample is modeled as a multivariate Gaussian. Parameter learning of the proposed RFS energy does not require any heavy computation. We evaluate the proposed approach on the MVTec AD dataset, a multi-object defect detection dataset. Experimental results show the outstanding performance of our proposed approach compared to the state-of-the-art methods, and the proposed RFS energy outperforms the state of the art in few-shot learning settings. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 252,411 |
1710.05741 | A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning | This paper takes a step towards temporal reasoning in a dynamically changing video, not in the pixel space that constitutes its frames, but in a latent space that describes the non-linear dynamics of the objects in its world. We introduce the Kalman variational auto-encoder, a framework for unsupervised learning of sequential data that disentangles two latent representations: an object's representation, coming from a recognition model, and a latent state describing its dynamics. As a result, the evolution of the world can be imagined and missing data imputed, both without the need to generate high dimensional frames at each time step. The model is trained end-to-end on videos of a variety of simulated physical systems, and outperforms competing methods in generative and missing data imputation tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 82,681 |
1804.07270 | Deep Dynamic Boosted Forest | Random forest is widely exploited as an ensemble learning method. In many practical applications, however, there is still a significant challenge to learn from imbalanced data. To alleviate this limitation, we propose a deep dynamic boosted forest (DDBF), a novel ensemble algorithm that incorporates the notion of hard example mining into random forest. Specifically, we propose to measure the quality of each leaf node of every decision tree in the random forest to determine hard examples. By iteratively training and then removing easy examples from training data, we evolve the random forest to focus on hard examples dynamically so as to balance the proportion of samples and learn decision boundaries better. Data can be cascaded through these random forests learned in each iteration in sequence to generate more accurate predictions. Our DDBF outperforms random forest on 5 UCI datasets, MNIST and SATIMAGE, and achieves state-of-the-art results compared to other deep models. Moreover, we show that DDBF is also a new way of sampling and can be very useful and efficient when learning from imbalanced data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 95,494 |
1906.02402 | An Analysis of Emotion Communication Channels in Fan Fiction: Towards Emotional Storytelling | The centrality of emotion in stories told by humans is underpinned by numerous studies in literature and psychology. The research in automatic storytelling has recently turned towards emotional storytelling, in which characters' emotions play an important role in the plot development. However, these studies mainly use emotion to generate propositional statements in the form "A feels affection towards B" or "A confronts B". At the same time, emotional behavior does not boil down to such propositional descriptions, as humans display complex and highly variable patterns in communicating their emotions, both verbally and non-verbally. In this paper, we analyze how emotions are expressed non-verbally in a corpus of fan fiction short stories. Our analysis shows that stories written by humans convey character emotions along various non-verbal channels. We find that some non-verbal channels, such as facial expressions and voice characteristics of the characters, are more strongly associated with joy, while gestures and body postures are more likely to occur with trust. Based on our analysis, we argue that automatic storytelling systems should take variability of emotion into account when generating descriptions of characters' emotions. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 134,042 |
1909.04812 | Proceedings of the AI-HRI Symposium at AAAI-FSS 2019 | The past few years have seen rapid progress in the development of service robots. Universities and companies alike have launched major research efforts toward the deployment of ambitious systems designed to aid human operators performing a variety of tasks. These robots are intended to make those who may otherwise need to live in assisted care facilities more independent, to help workers perform their jobs, or simply to make life more convenient. Service robots provide a powerful platform on which to study Artificial Intelligence (AI) and Human-Robot Interaction (HRI) in the real world. Research sitting at the intersection of AI and HRI is crucial to the success of service robots if they are to fulfill their mission. This symposium seeks to highlight research enabling robots to effectively interact with people autonomously while modeling, planning, and reasoning about the environment that the robot operates in and the tasks that it must perform. AI-HRI deals with the challenge of interacting with humans in environments that are relatively unstructured or which are structured around people rather than machines, as well as the possibility that the robot may need to interact naturally with people rather than through teach pendants, programming, or similar interfaces. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 144,905 |
2501.07088 | MathReader: Text-to-Speech for Mathematical Documents | TTS (Text-to-Speech) document readers from Microsoft, Adobe, Apple, and OpenAI have been deployed worldwide. They provide relatively good TTS results for general plain text, but sometimes skip content or produce unsatisfactory results for mathematical expressions. This is because most modern academic papers are written in LaTeX, and when LaTeX formulas are compiled, they are rendered as distinctive text forms within the document. However, traditional TTS document readers output only the text as it is recognized, without considering the mathematical meaning of the formulas. To address this issue, we propose MathReader, which effectively integrates OCR, a fine-tuned T5 model, and TTS. MathReader demonstrated a lower Word Error Rate (WER) than existing TTS document readers, such as Microsoft Edge and Adobe Acrobat, when processing documents containing mathematical formulas. MathReader reduced the WER from 0.510 to 0.281 compared to Microsoft Edge, and from 0.617 to 0.281 compared to Adobe Acrobat. This will significantly contribute to alleviating the inconvenience faced by users who want to listen to documents, especially those who are visually impaired. The code is available at https://github.com/hyeonsieun/MathReader. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 524,261 |
1502.00115 | Optimized Projection for Sparse Representation Based Classification | Dimensionality reduction (DR) methods have been commonly used as a principled way to understand high-dimensional data such as facial images. In this paper, we propose a new supervised DR method called Optimized Projection for Sparse Representation based Classification (OP-SRC), which is based on the recent face recognition method, Sparse Representation based Classification (SRC). SRC seeks a sparse linear combination over all the training data for a given query image, and makes the decision based on the minimal reconstruction residual. OP-SRC is designed around the decision rule of SRC: it aims to reduce the within-class reconstruction residual and simultaneously increase the between-class reconstruction residual on the training data. The projections are optimized and match well with the mechanism of SRC. Therefore, SRC performs well in the OP-SRC transformed space. The feasibility and effectiveness of the proposed method are verified on the Yale, ORL and UMIST databases with promising results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 39,767 |
2404.16251 | Prompt Leakage effect and defense strategies for multi-turn LLM interactions | Prompt leakage poses a compelling security and privacy threat in LLM applications. Leakage of system prompts may compromise intellectual property, and act as adversarial reconnaissance for an attacker. A systematic evaluation of prompt leakage threats and mitigation strategies is lacking, especially for multi-turn LLM interactions. In this paper, we systematically investigate LLM vulnerabilities against prompt leakage for 10 closed- and open-source LLMs, across four domains. We design a unique threat model which leverages the LLM sycophancy effect and elevates the average attack success rate (ASR) from 17.7% to 86.2% in a multi-turn setting. Our standardized setup further allows dissecting leakage of specific prompt contents such as task instructions and knowledge documents. We measure the mitigation effect of 7 black-box defense strategies, along with finetuning an open-source model to defend against leakage attempts. We present different combinations of defenses against our threat model, including a cost analysis. Our study highlights key takeaways for building secure LLM applications and provides directions for research in multi-turn LLM interactions. | false | false | false | false | true | false | false | false | true | false | false | false | true | false | false | false | false | false | 449,410 |
2309.12235 | Distributed Conjugate Gradient Method via Conjugate Direction Tracking | We present a distributed conjugate gradient method for distributed optimization problems, where each agent computes an optimal solution of the problem locally without any central computation or coordination, while communicating with its immediate, one-hop neighbors over a communication network. Each agent updates its local problem variable using an estimate of the average conjugate direction across the network, computed via a dynamic consensus approach. Our algorithm enables the agents to use uncoordinated step-sizes. We prove convergence of the local variable of each agent to the optimal solution of the aggregate optimization problem, without requiring decreasing step-sizes. In addition, we demonstrate the efficacy of our algorithm on distributed state estimation problems and their robust counterparts, where we compare its performance to existing distributed first-order optimization methods. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | 393,709 |
1811.00464 | A latent topic model for mining heterogenous non-randomly missing electronic health records data | Electronic health records (EHR) are a rich, heterogeneous collection of patient health information, whose broad adoption provides great opportunities for systematic health data mining. However, heterogeneous EHR data types and biased ascertainment impose computational challenges. Here, we present mixEHR, an unsupervised generative model integrating collaborative filtering and latent topic models, which jointly models the discrete distributions of data observation bias and actual data using latent disease-topic distributions. We apply mixEHR to 12.8 million phenotypic observations from the MIMIC dataset, and use it to reveal latent disease topics, interpret EHR results, impute missing data, and predict mortality in intensive care units. Using both simulated and real data, we show that mixEHR outperforms previous methods and reveals meaningful multi-disease insights. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 112,105 |
1908.04725 | Learning elementary structures for 3D shape generation and matching | We propose to represent shapes as the deformation and combination of learnable elementary 3D structures, which are primitives resulting from training over a collection of shapes. We demonstrate that the learned elementary 3D structures lead to clear improvements in 3D shape generation and matching. More precisely, we present two complementary approaches for learning elementary structures: (i) patch deformation learning and (ii) point translation learning. Both approaches can be extended to abstract structures of higher dimensions for improved results. We evaluate our method on two tasks: reconstructing ShapeNet objects and estimating dense correspondences between human scans (FAUST inter challenge). We show a 16% improvement over surface deformation approaches for shape reconstruction and outperform the FAUST inter challenge state of the art by 6%. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 141,554 |
2412.04678 | Unsupervised Segmentation by Diffusing, Walking and Cutting | We propose an unsupervised image segmentation method using features from pre-trained text-to-image diffusion models. Inspired by classic spectral clustering approaches, we construct adjacency matrices from self-attention layers between image patches and recursively partition using Normalised Cuts. A key insight is that self-attention probability distributions, which capture semantic relations between patches, can be interpreted as a transition matrix for random walks across the image. We leverage this by first using Random Walk Normalized Cuts directly on these self-attention activations to partition the image, minimizing transition probabilities between clusters while maximizing coherence within clusters. Applied recursively, this yields a hierarchical segmentation that reflects the rich semantics in the pre-trained attention layers, without any additional training. Next, we explore other ways to build the NCuts adjacency matrix from features, and how we can use the random walk interpretation of self-attention to capture long-range relationships. Finally, we propose an approach to automatically determine the NCut cost criterion, avoiding the need to tune this manually. We quantitatively analyse the effect of incorporating different features, a constant versus dynamic NCut threshold, and multi-node paths when constructing the NCuts adjacency matrix. We show that our approach surpasses all existing methods for zero-shot unsupervised segmentation, achieving state-of-the-art results on COCO-Stuff-27 and Cityscapes. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 514,513 |
2204.04462 | A3CLNN: Spatial, Spectral and Multiscale Attention ConvLSTM Neural Network for Multisource Remote Sensing Data Classification | The problem of effectively exploiting the information from multiple data sources has become a relevant but challenging research topic in remote sensing. In this paper, we propose a new approach to exploit the complementarity of two data sources: hyperspectral images (HSIs) and light detection and ranging (LiDAR) data. Specifically, we develop a new dual-channel spatial, spectral and multiscale attention convolutional long short-term memory neural network (called dual-channel A3CLNN) for feature extraction and classification of multisource remote sensing data. Spatial, spectral and multiscale attention mechanisms are first designed for HSI and LiDAR data in order to learn spectral- and spatial-enhanced feature representations, and to represent multiscale information for different classes. In the designed fusion network, a novel composite attention learning mechanism (combined with a three-level fusion strategy) is used to fully integrate the features in these two data sources. Finally, inspired by the idea of transfer learning, a novel stepwise training strategy is designed to yield a final classification result. Our experimental results, conducted on several multisource remote sensing data sets, demonstrate that the newly proposed dual-channel A3CLNN exhibits better feature representation ability (leading to more competitive classification performance) than other state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 290,667 |
2410.11870 | Post-Userist Recommender Systems: A Manifesto | We define userist recommendation as an approach to recommender systems framed solely in terms of the relation between the user and system. Post-userist recommendation posits a larger field of relations in which stakeholders are embedded and distinguishes the recommendation function (which can potentially connect creators with audiences) from generative media. We argue that in the era of generative media, userist recommendation becomes indistinguishable from personalized media generation, and therefore post-userist recommendation is the only path forward for recommender systems research. | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 498,758 |
1804.08087 | Anchor-based Nearest Class Mean Loss for Convolutional Neural Networks | Discriminative features are critical for machine learning applications. Most existing deep learning approaches, however, rely on convolutional neural networks (CNNs) for learning features, whose discriminant power is not explicitly enforced. In this paper, we propose a novel approach to train deep CNNs by imposing the intra-class compactness and the inter-class separability, so as to enhance the learned features' discriminant power. To this end, we introduce anchors, which are predefined vectors regarded as the centers for each class and fixed during training. Discriminative features are obtained by constraining the deep CNNs to map training samples to the corresponding anchors as close as possible. We propose two principles to select the anchors, and measure the proximity of two points using the Euclidean and cosine distance metric functions, which results in two novel loss functions. These loss functions require no sample pairs or triplets and can be efficiently optimized by batch stochastic gradient descent. We test the proposed method on three benchmark image classification datasets and demonstrate its promising results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 95,687 |
2310.13499 | DistillCSE: Distilled Contrastive Learning for Sentence Embeddings | This paper proposes the DistillCSE framework, which performs contrastive learning under the self-training paradigm with knowledge distillation. The potential advantage of DistillCSE is its self-enhancing feature: using a base model to provide additional supervision signals, a stronger model may be learned through knowledge distillation. However, the vanilla DistillCSE through the standard implementation of knowledge distillation only achieves marginal improvements due to severe overfitting. Further quantitative analyses demonstrate that this is because the standard knowledge distillation exhibits a relatively large variance in the teacher model's logits, owing to the essence of contrastive learning. To mitigate the issue induced by this high variance, this paper accordingly proposes two simple yet effective solutions for knowledge distillation: a Group-P shuffling strategy as an implicit regularization and the averaging of logits from multiple teacher components. Experiments on standard benchmarks demonstrate that the proposed DistillCSE outperforms many strong baseline methods and yields a new state-of-the-art performance. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 401,468 |
2009.05792 | A CNN Based Approach for the Near-Field Photometric Stereo Problem | Reconstructing the 3D shape of an object using several images under different light sources is a very challenging task, especially when realistic assumptions such as light propagation and attenuation, perspective viewing geometry and specular light reflection are considered. Many works tackling Photometric Stereo (PS) problems relax most of the aforementioned assumptions; in particular, they ignore specular reflection and global illumination effects. In this work, we propose the first CNN based approach capable of handling these realistic assumptions in Photometric Stereo. We leverage recent improvements of deep neural networks for far-field Photometric Stereo and adapt them to the near-field setup. We achieve this by employing an iterative procedure for shape estimation which has two main steps. Firstly, we train a per-pixel CNN to predict surface normals from reflectance samples. Secondly, we compute the depth by integrating the normal field in order to iteratively estimate light directions and attenuation, which is used to compensate the input images to compute reflectance samples for the next iteration. To the best of our knowledge, this is the first near-field framework able to accurately predict the 3D shape of highly specular objects. Our method outperforms competing state-of-the-art near-field Photometric Stereo approaches on both synthetic and real experiments. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 195,428 |
2210.17170 | Efficient Document Retrieval by End-to-End Refining and Quantizing BERT Embedding with Contrastive Product Quantization | Efficient document retrieval heavily relies on the technique of semantic hashing, which learns a binary code for every document and employs Hamming distance to evaluate document distances. However, existing semantic hashing methods are mostly established on outdated TFIDF features, which omit much of the important semantic information in documents. Furthermore, the Hamming distance can only take one of several integer values, significantly limiting its representational ability for document distances. To address these issues, in this paper, we propose to leverage BERT embeddings to perform efficient retrieval based on the product quantization technique, which assigns every document a real-valued codeword from the codebook, instead of a binary code as in semantic hashing. Specifically, we first transform the original BERT embeddings via a learnable mapping and feed the transformed embedding into a probabilistic product quantization module to output the assigned codeword. The refining and quantizing modules can be optimized in an end-to-end manner by minimizing the probabilistic contrastive loss. A mutual information maximization based method is further proposed to improve the representativeness of codewords, so that documents can be quantized more accurately. Extensive experiments conducted on three benchmarks demonstrate that our proposed method significantly outperforms current state-of-the-art baselines. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 327,597 |
1901.04675 | EV Charging Optimization based on Day-ahead Pricing Incorporating Consumer Behavior | With the increasing penetration of electric vehicles (EVs) into the automotive market, the electricity peak demand would increase significantly due to home EV charging. This paper tackles this problem by defining an 'ideal' EV consumption profile, from which a day-ahead pricing model is derived. Based on historical residential EV-use data ranging over a year, we demonstrate that the proposed optimization process results in a pricing profile that achieves a dual objective of minimizing the total electricity cost, as well as the peak aggregate system demand. Importantly, the proposed formulation is simple, and accounts for the tradeoff between consumer convenience in terms of the number of available charging slots during a day and the reduction in the total electricity cost. This technique is demonstrated to be scalable with respect to the size of the community whose EV charging demands are being optimized. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 118,642 |
2404.09633 | In-Context Translation: Towards Unifying Image Recognition, Processing, and Generation | We propose In-Context Translation (ICT), a general learning framework to unify visual recognition (e.g., semantic segmentation), low-level image processing (e.g., denoising), and conditional image generation (e.g., edge-to-image synthesis). Thanks to unification, ICT significantly reduces the inherent inductive bias that comes with designing models for specific tasks, and it maximizes mutual enhancement across similar tasks. However, the unification across a large number of tasks is non-trivial due to various data formats and training pipelines. To this end, ICT introduces two designs. Firstly, it standardizes input-output data of different tasks into RGB image pairs, e.g., semantic segmentation data pairs an RGB image with its segmentation mask in the same RGB format. This turns different tasks into a general translation task between two RGB images. Secondly, it standardizes the training of different tasks into a general in-context learning, where "in-context" means the input comprises an example input-output pair of the target task and a query image. The learning objective is to generate the "missing" data paired with the query. The implicit translation process is thus between the query and the generated image. In experiments, ICT unifies ten vision tasks and showcases impressive performance on their respective benchmarks. Notably, ICT performs well across three major categories of computer vision tasks, while its two competitors (Painter and PromptDiffusion) are only effective in at most two of these task categories. In addition, compared to its competitors, ICT trained on only 4 RTX 3090 GPUs is shown to be more efficient and less costly in training. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 446,771 |
2211.03888 | Proceedings of Principle and practice of data and Knowledge Acquisition Workshop 2022 (PKAW 2022) | Over the past two decades, PKAW has provided a forum for researchers and practitioners to discuss the state of the art in the area of knowledge acquisition and machine intelligence (MI, also Artificial Intelligence, AI). PKAW 2022 will continue this focus and welcomes contributions on the multi-disciplinary approach of human and big-data-driven knowledge acquisition, as well as AI techniques and applications. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 329,064 |
2404.14027 | OccFeat: Self-supervised Occupancy Feature Prediction for Pretraining BEV Segmentation Networks | We introduce a self-supervised pretraining method, called OccFeat, for camera-only Bird's-Eye-View (BEV) segmentation networks. With OccFeat, we pretrain a BEV network via occupancy prediction and feature distillation tasks. Occupancy prediction provides a 3D geometric understanding of the scene to the model. However, the geometry learned is class-agnostic. Hence, we add semantic information to the model in the 3D space through distillation from a self-supervised pretrained image foundation model. Models pretrained with our method exhibit improved BEV semantic segmentation performance, particularly in low-data scenarios. Moreover, empirical results affirm the efficacy of integrating feature distillation with 3D occupancy prediction in our pretraining approach. Repository: https://github.com/valeoai/Occfeat | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 448,540 |
2008.12091 | Limitations of Implicit Bias in Matrix Sensing: Initialization Rank Matters | In matrix sensing, we first numerically identify the sensitivity to the initialization rank as a new limitation of the implicit bias of gradient flow. We will partially quantify this phenomenon mathematically, where we establish that the gradient flow of the empirical risk is implicitly biased towards low-rank outcomes and successfully learns the planted low-rank matrix, provided that the initialization is low-rank and within a specific "capture neighborhood". This capture neighborhood is far larger than the corresponding neighborhood in local refinement results; the former contains all models with zero training error whereas the latter is a small neighborhood of a model with zero test error. These new insights enable us to design an alternative algorithm for matrix sensing that complements the high-rank and near-zero initialization scheme which is predominant in the existing literature. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 193,481 |
2303.15324 | Can Large Language Models design a Robot? | Large Language Models can lead researchers in the design of robots. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 354,451 |
1911.00954 | Problem Dependent Reinforcement Learning Bounds Which Can Identify Bandit Structure in MDPs | In order to make good decisions under uncertainty, an agent must learn from observations. To do so, two of the most common frameworks are Contextual Bandits and Markov Decision Processes (MDPs). In this paper, we study whether there exist algorithms for the more general framework (MDP) which automatically provide the best performance bounds for the specific problem at hand without user intervention and without modifying the algorithm. In particular, it is found that a very minor variant of a recently proposed reinforcement learning algorithm for MDPs already matches the best possible regret bound $\tilde O (\sqrt{SAT})$ in the dominant term if deployed on a tabular Contextual Bandit problem, despite the agent being agnostic to such a setting. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 151,967 |
1601.02284 | Update or Wait: How to Keep Your Data Fresh | In this work, we study how to optimally manage the freshness of information updates sent from a source node to a destination via a channel. A proper metric for data freshness at the destination is the age-of-information, or simply age, which is defined as how old the freshest received update is since the moment that this update was generated at the source node (e.g., a sensor). A reasonable update policy is the zero-wait policy, i.e., the source node submits a fresh update once the previous update is delivered and the channel becomes free, which achieves the maximum throughput and the minimum delay. Surprisingly, this zero-wait policy does not always minimize the age. This counter-intuitive phenomenon motivates us to study how to optimally control information updates to keep the data fresh and to understand when the zero-wait policy is optimal. We introduce a general age penalty function to characterize the level of dissatisfaction on data staleness and formulate the average age penalty minimization problem as a constrained semi-Markov decision problem (SMDP) with an uncountable state space. We develop efficient algorithms to find the optimal update policy among all causal policies, and establish sufficient and necessary conditions for the optimality of the zero-wait policy. Our investigation shows that the zero-wait policy is far from the optimum if (i) the age penalty function grows quickly with respect to the age, (ii) the packet transmission times over the channel are positively correlated over time, or (iii) the packet transmission times are highly random (e.g., following a heavy-tail distribution). | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 50,818 |
2401.05045 | Improved Bounds on the Number of Support Points of the
Capacity-Achieving Input for Amplitude Constrained Poisson Channels | This work considers a discrete-time Poisson noise channel with an input amplitude constraint $\mathsf{A}$ and a dark current parameter $\lambda$. It is known that the capacity-achieving distribution for this channel is discrete with finitely many points. Recently, for $\lambda=0$, a lower bound of order $\sqrt{\mathsf{A}}$ and an upper bound of order $\mathsf{A} \log^2(\mathsf{A})$ have been demonstrated on the cardinality of the support of the optimal input distribution. In this work, we improve these results in several ways. First, we provide upper and lower bounds that hold for non-zero dark current. Second, we produce a sharper upper bound with a far simpler technique. In particular, for $\lambda=0$, we sharpen the upper bound from the order of $\mathsf{A} \log^2(\mathsf{A})$ to the order of $\mathsf{A}$. Finally, some other additional information about the location of the support is provided. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 420,630 |
2203.07285 | Diversifying Content Generation for Commonsense Reasoning with Mixture
of Knowledge Graph Experts | Generative commonsense reasoning (GCR) in natural language is to reason about the commonsense while generating coherent text. Recent years have seen a surge of interest in improving the generation quality of commonsense reasoning tasks. Nevertheless, these approaches have seldom investigated diversity in the GCR tasks, which aims to generate alternative explanations for a real-world situation or predict all possible outcomes. Diversifying GCR is challenging as it expects to generate multiple outputs that are not only semantically different but also grounded in commonsense knowledge. In this paper, we propose MoKGE, a novel method that diversifies the generative reasoning by a mixture of expert (MoE) strategy on commonsense knowledge graphs (KG). A set of knowledge experts seek diverse reasoning on KG to encourage various generation outputs. Empirical experiments demonstrated that MoKGE can significantly improve the diversity while achieving on par performance on accuracy on two GCR benchmarks, based on both automatic and human evaluations. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 285,385 |
2011.11637 | Nudge Attacks on Point-Cloud DNNs | The wide adaption of 3D point-cloud data in safety-critical applications such as autonomous driving makes adversarial samples a real threat. Existing adversarial attacks on point clouds achieve high success rates but modify a large number of points, which is usually difficult to do in real-life scenarios. In this paper, we explore a family of attacks that only perturb a few points of an input point cloud, and name them nudge attacks. We demonstrate that nudge attacks can successfully flip the results of modern point-cloud DNNs. We present two variants, gradient-based and decision-based, showing their effectiveness in white-box and grey-box scenarios. Our extensive experiments show nudge attacks are effective at generating both targeted and untargeted adversarial point clouds, by changing a few points or even a single point from the entire point-cloud input. We find that with a single point we can reliably thwart predictions in 12--80% of cases, whereas 10 points allow us to further increase this to 37--95%. Finally, we discuss the possible defenses against such attacks, and explore their limitations. | false | false | false | false | true | false | true | false | false | false | false | true | true | false | false | false | false | false | 207,892 |
2006.16442 | Provable Online CP/PARAFAC Decomposition of a Structured Tensor via
Dictionary Learning | We consider the problem of factorizing a structured 3-way tensor into its constituent Canonical Polyadic (CP) factors. This decomposition, which can be viewed as a generalization of singular value decomposition (SVD) for tensors, reveals how the tensor dimensions (features) interact with each other. However, since the factors are a priori unknown, the corresponding optimization problems are inherently non-convex. The existing guaranteed algorithms which handle this non-convexity incur an irreducible error (bias), and only apply to cases where all factors have the same structure. To this end, we develop a provable algorithm for online structured tensor factorization, wherein one of the factors obeys some incoherence conditions, and the others are sparse. Specifically we show that, under some relatively mild conditions on initialization, rank, and sparsity, our algorithm recovers the factors exactly (up to scaling and permutation) at a linear rate. Complementary to our theoretical results, our synthetic and real-world data evaluations showcase superior performance compared to related techniques. Moreover, its scalability and ability to learn on-the-fly makes it suitable for real-world tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 184,811 |
1812.05206 | Design Pseudo Ground Truth with Motion Cue for Unsupervised Video Object
Segmentation | One major technical debt in video object segmentation is to label the object masks for training instances. To address this, we propose to prepare inexpensive, yet high-quality pseudo ground truth corrected with motion cues for video object segmentation training. Our method conducts semantic segmentation using instance segmentation networks and, then, selects the segmented object of interest as the pseudo ground truth based on the motion information. Afterwards, the pseudo ground truth is exploited to finetune the pretrained objectness network to facilitate object segmentation in the remaining frames of the video. We show that the pseudo ground truth could effectively improve the segmentation performance. This straightforward unsupervised video object segmentation method is more efficient than existing methods. Experimental results on DAVIS and FBMS show that the proposed method outperforms state-of-the-art unsupervised segmentation methods on various benchmark datasets. Moreover, the category-agnostic pseudo ground truth has great potential to extend to multiple arbitrary object tracking. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 116,361 |
1904.09856 | Learning to Calibrate Straight Lines for Fisheye Image Rectification | This paper presents a new deep-learning based method to simultaneously calibrate the intrinsic parameters of fisheye lens and rectify the distorted images. Assuming that the distorted lines generated by fisheye projection should be straight after rectification, we propose a novel deep neural network to impose explicit geometry constraints onto processes of the fisheye lens calibration and the distorted image rectification. In addition, considering the nonlinearity of distortion distribution in fisheye images, the proposed network fully exploits multi-scale perception to equalize the rectification effects on the whole image. To train and evaluate the proposed model, we also create a new largescale dataset labeled with corresponding distortion parameters and well-annotated distorted lines. Compared with the state-of-the-art methods, our model achieves the best published rectification quality and the most accurate estimation of distortion parameters on a large set of synthetic and real fisheye images. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 128,501 |
2102.06228 | Learning Gaussian-Bernoulli RBMs using Difference of Convex Functions
Optimization | The Gaussian-Bernoulli restricted Boltzmann machine (GB-RBM) is a useful generative model that captures meaningful features from the given $n$-dimensional continuous data. The difficulties associated with learning GB-RBM are reported extensively in earlier studies. They indicate that the training of the GB-RBM using the current standard algorithms, namely, contrastive divergence (CD) and persistent contrastive divergence (PCD), needs a carefully chosen small learning rate to avoid divergence which, in turn, results in slow learning. In this work, we alleviate such difficulties by showing that the negative log-likelihood for a GB-RBM can be expressed as a difference of convex functions if we keep the variance of the conditional distribution of visible units (given hidden unit states) and the biases of the visible units, constant. Using this, we propose a stochastic {\em difference of convex functions} (DC) programming (S-DCP) algorithm for learning the GB-RBM. We present extensive empirical studies on several benchmark datasets to validate the performance of this S-DCP algorithm. It is seen that S-DCP is better than the CD and PCD algorithms in terms of speed of learning and the quality of the generative model learnt. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 219,678 |
2004.14704 | A Span-based Linearization for Constituent Trees | We propose a novel linearization of a constituent tree, together with a new locally normalized model. For each split point in a sentence, our model computes the normalizer on all spans ending with that split point, and then predicts a tree span from them. Compared with global models, our model is fast and parallelizable. Different from previous local models, our linearization method is tied on the spans directly and considers more local features when performing span prediction, which is more interpretable and effective. Experiments on PTB (95.8 F1) and CTB (92.4 F1) show that our model significantly outperforms existing local models and efficiently achieves competitive results with global models. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 174,981 |
2205.02116 | Optimizing One-pixel Black-box Adversarial Attacks | The output of Deep Neural Networks (DNN) can be altered by a small perturbation of the input in a black box setting by making multiple calls to the DNN. However, the high computation and time required makes the existing approaches unusable. This work seeks to improve the One-pixel (few-pixel) black-box adversarial attacks to reduce the number of calls to the network under attack. The One-pixel attack uses a non-gradient optimization algorithm to find pixel-level perturbations under the constraint of a fixed number of pixels, which causes the network to predict the wrong label for a given image. We show through experimental results how the choice of the optimization algorithm and initial positions to search can reduce function calls and increase attack success significantly, making the attack more practical in real-world settings. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 294,849 |
1806.07155 | Semi-supervised Hashing for Semi-Paired Cross-View Retrieval | Recently, hashing techniques have gained importance in large-scale retrieval tasks because of their retrieval speed. Most of the existing cross-view frameworks assume that data are well paired. However, the fully-paired multiview situation is not universal in real applications. The aim of the method proposed in this paper is to learn the hashing function for semi-paired cross-view retrieval tasks. To utilize the label information of partial data, we propose a semi-supervised hashing learning framework which jointly performs feature extraction and classifier learning. The experimental results on two datasets show that our method outperforms several state-of-the-art methods in terms of retrieval accuracy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 100,854 |
2207.07483 | A Systematic Review and Replicability Study of BERT4Rec for Sequential
Recommendation | BERT4Rec is an effective model for sequential recommendation based on the Transformer architecture. In the original publication, BERT4Rec claimed superiority over other available sequential recommendation approaches (e.g. SASRec), and it is now frequently being used as a state-of-the-art baseline for sequential recommendations. However, not all subsequent publications confirmed this result and proposed other models that were shown to outperform BERT4Rec in effectiveness. In this paper we systematically review all publications that compare BERT4Rec with another popular Transformer-based model, namely SASRec, and show that BERT4Rec results are not consistent within these publications. To understand the reasons behind this inconsistency, we analyse the available implementations of BERT4Rec and show that we fail to reproduce results of the original BERT4Rec publication when using their default configuration parameters. However, we are able to replicate the reported results with the original code if training for a much longer amount of time (up to 30x) compared to the default configuration. We also propose our own implementation of BERT4Rec based on the Hugging Face Transformers library, which we demonstrate replicates the originally reported results on 3 out of 4 datasets, while requiring up to 95% less training time to converge. Overall, from our systematic review and detailed experiments, we conclude that BERT4Rec does indeed exhibit state-of-the-art effectiveness for sequential recommendation, but only when trained for a sufficient amount of time. Additionally, we show that our implementation can further benefit from adapting other Transformer architectures that are available in the Hugging Face Transformers library (e.g. using disentangled attention, as provided by DeBERTa, or larger hidden layer size cf. ALBERT). | false | false | false | false | true | true | true | false | false | false | false | false | false | false | false | false | false | false | 308,213 |
2010.03222 | Unsupervised Evaluation for Question Answering with Transformers | It is challenging to automatically evaluate the answer of a QA model at inference time. Although many models provide confidence scores, and simple heuristics can go a long way towards indicating answer correctness, such measures are heavily dataset-dependent and are unlikely to generalize. In this work, we begin by investigating the hidden representations of questions, answers, and contexts in transformer-based QA architectures. We observe a consistent pattern in the answer representations, which we show can be used to automatically evaluate whether or not a predicted answer span is correct. Our method does not require any labeled data and outperforms strong heuristic baselines, across 2 datasets and 7 domains. We are able to predict whether or not a model's answer is correct with 91.37% accuracy on SQuAD, and 80.7% accuracy on SubjQA. We expect that this method will have broad applications, e.g., in the semi-automatic development of QA datasets. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 199,330 |
1710.09289 | Automated cardiovascular magnetic resonance image analysis with fully
convolutional networks | Cardiovascular magnetic resonance (CMR) imaging is a standard imaging modality for assessing cardiovascular diseases (CVDs), the leading cause of death globally. CMR enables accurate quantification of the cardiac chamber volume, ejection fraction and myocardial mass, providing information for diagnosis and monitoring of CVDs. However, for years, clinicians have been relying on manual approaches for CMR image analysis, which is time consuming and prone to subjective errors. It is a major clinical challenge to automatically derive quantitative and clinically relevant information from CMR images. Deep neural networks have shown a great potential in image pattern recognition and segmentation for a variety of tasks. Here we demonstrate an automated analysis method for CMR images, which is based on a fully convolutional network (FCN). The network is trained and evaluated on a large-scale dataset from the UK Biobank, consisting of 4,875 subjects with 93,500 pixelwise annotated images. The performance of the method has been evaluated using a number of technical metrics, including the Dice metric, mean contour distance and Hausdorff distance, as well as clinically relevant measures, including left ventricle (LV) end-diastolic volume (LVEDV) and end-systolic volume (LVESV), LV mass (LVM); right ventricle (RV) end-diastolic volume (RVEDV) and end-systolic volume (RVESV). By combining FCN with a large-scale annotated dataset, the proposed automated method achieves a high performance on par with human experts in segmenting the LV and RV on short-axis CMR images and the left atrium (LA) and right atrium (RA) on long-axis CMR images. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 83,184 |
1203.3477 | A Scalable Method for Solving High-Dimensional Continuous POMDPs Using
Local Approximation | Partially-Observable Markov Decision Processes (POMDPs) are typically solved by finding an approximate global solution to a corresponding belief-MDP. In this paper, we offer a new planning algorithm for POMDPs with continuous state, action and observation spaces. Since such domains have an inherent notion of locality, we can find an approximate solution using local optimization methods. We parameterize the belief distribution as a Gaussian mixture, and use the Extended Kalman Filter (EKF) to approximate the belief update. Since the EKF is a first-order filter, we can marginalize over the observations analytically. By using feedback control and state estimation during policy execution, we recover a behavior that is effectively conditioned on incoming observations despite the unconditioned planning. Local optimization provides no guarantees of global optimality, but it allows us to tackle domains that are at least an order of magnitude larger than the current state-of-the-art. We demonstrate the scalability of our algorithm by considering a simulated hand-eye coordination domain with 16 continuous state dimensions and 6 continuous action dimensions. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 14,925 |
2103.11370 | Online Convex Optimization with Continuous Switching Constraint | In many sequential decision making applications, the change of decision would bring an additional cost, such as the wear-and-tear cost associated with changing server status. To control the switching cost, we introduce the problem of online convex optimization with continuous switching constraint, where the goal is to achieve a small regret given a budget on the \emph{overall} switching cost. We first investigate the hardness of the problem, and provide a lower bound of order $\Omega(\sqrt{T})$ when the switching cost budget $S=\Omega(\sqrt{T})$, and $\Omega(\min\{\frac{T}{S},T\})$ when $S=O(\sqrt{T})$, where $T$ is the time horizon. The essential idea is to carefully design an adaptive adversary, who can adjust the loss function according to the cumulative switching cost of the player incurred so far based on the orthogonal technique. We then develop a simple gradient-based algorithm which enjoys the minimax optimal regret bound. Finally, we show that, for strongly convex functions, the regret bound can be improved to $O(\log T)$ for $S=\Omega(\log T)$, and $O(\min\{T/\exp(S)+S,T\})$ for $S=O(\log T)$. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 225,784 |
2306.08329 | Research on an improved Conformer end-to-end Speech Recognition Model
with R-Drop Structure | To address the issue of poor generalization ability in end-to-end speech recognition models within deep learning, this study proposes a new Conformer-based speech recognition model called "Conformer-R" that incorporates the R-drop structure. This model combines the Conformer model, which has shown promising results in speech recognition, with the R-drop structure. By doing so, the model is able to effectively model both local and global speech information while also reducing overfitting through the use of the R-drop structure. This enhances the model's ability to generalize and improves overall recognition efficiency. The model was first pre-trained on the Aishell1 and Wenetspeech datasets for general domain adaptation, and subsequently fine-tuned on computer-related audio data. Comparison tests with classic models such as LAS and Wenet were performed on the same test set, demonstrating the Conformer-R model's ability to effectively improve generalization. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 373,377 |
2110.06416 | MMIU: Dataset for Visual Intent Understanding in Multimodal Assistants | In multimodal assistants, where vision is also one of the input modalities, the identification of user intent becomes a challenging task as visual input can influence the outcome. Current digital assistants take spoken input and try to determine the user intent from conversational or device context. So, a dataset which includes visual input (i.e. images or videos) for the corresponding questions targeted for multimodal assistant use cases is not readily available. The research in visual question answering (VQA) and visual question generation (VQG) is a great step forward. However, they do not capture questions that a visually-abled person would ask multimodal assistants. Moreover, many times questions do not seek information from external knowledge. In this paper, we provide a new dataset, MMIU (MultiModal Intent Understanding), that contains questions and corresponding intents provided by human annotators while looking at images. We, then, use this dataset for intent classification task in multimodal digital assistant. We also experiment with various approaches for combining vision and language features including the use of multimodal transformer for classification of image-question pairs into 14 intents. We provide the benchmark results and discuss the role of visual and text features for the intent classification task on our dataset. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 260,609 |
2005.06879 | Solve Traveling Salesman Problem by Monte Carlo Tree Search and Deep
Neural Network | We present a self-learning approach that combines deep reinforcement learning and Monte Carlo tree search to solve the traveling salesman problem. The proposed approach has two advantages. First, it adopts deep reinforcement learning to compute the value functions for decision, which removes the need of hand-crafted features and labelled data. Second, it uses Monte Carlo tree search to select the best policy by comparing different value functions, which increases its generalization ability. Experimental results show that the proposed method performs favorably against other methods in small-to-medium problem settings. And it shows comparable performance as state-of-the-art in large problem setting. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 177,132 |
1909.00437 | Evaluating the Cross-Lingual Effectiveness of Massively Multilingual
Neural Machine Translation | The recently proposed massively multilingual neural machine translation (NMT) system has been shown to be capable of translating over 100 languages to and from English within a single model. Its improved translation performance on low resource languages hints at potential cross-lingual transfer capability for downstream tasks. In this paper, we evaluate the cross-lingual effectiveness of representations from the encoder of a massively multilingual NMT model on 5 downstream classification and sequence labeling tasks covering a diverse set of over 50 languages. We compare against a strong baseline, multilingual BERT (mBERT), in different cross-lingual transfer learning scenarios and show gains in zero-shot transfer in 4 out of these 5 tasks. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 143,635 |
2404.09574 | Predicting and Analyzing Pedestrian Crossing Behavior at Unsignalized
Crossings | Understanding and predicting pedestrian crossing behavior is essential for enhancing automated driving and improving driving safety. Predicting gap selection behavior and the use of zebra crossing enables driving systems to proactively respond and prevent potential conflicts. This task is particularly challenging at unsignalized crossings due to the ambiguous right of way, requiring pedestrians to constantly interact with vehicles and other pedestrians. This study addresses these challenges by utilizing simulator data to investigate scenarios involving multiple vehicles and pedestrians. We propose and evaluate machine learning models to predict gap selection in non-zebra scenarios and zebra crossing usage in zebra scenarios. We investigate and discuss how pedestrians' behaviors are influenced by various factors, including pedestrian waiting time, walking speed, the number of unused gaps, the largest missed gap, and the influence of other pedestrians. This research contributes to the evolution of intelligent vehicles by providing predictive models and valuable insights into pedestrian crossing behavior. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 446,742 |
2408.17157 | Optimizing Traversal Queries of Sensor Data Using a Rule-Based
Reachability Approach | Link Traversal queries face challenges in completeness and long execution time due to the size of the web. Reachability criteria define completeness by restricting the links followed by engines. However, the number of links to dereference remains the bottleneck of the approach. Web environments often have structures exploitable by query engines to prune irrelevant sources. Current criteria rely on using information from the query definition and predefined predicate. However, it is difficult to use them to traverse environments where logical expressions indicate the location of resources. We propose to use a rule-based reachability criterion that captures logical statements expressed in hypermedia descriptions within linked data documents to prune irrelevant sources. In this poster paper, we show how the Comunica link traversal engine is modified to take hints from a hypermedia control vocabulary, to prune irrelevant sources. Our preliminary findings show that by using this strategy, the query engine can significantly reduce the number of HTTP requests and the query execution time without sacrificing the completeness of results. Our work shows that the investigation of hypermedia controls in link pruning of traversal queries is a worthy effort for optimizing web queries of unindexed decentralized databases. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 484,600 |
2308.06862 | Effect of Choosing Loss Function when Using T-batching for
Representation Learning on Dynamic Networks | Representation learning methods have revolutionized machine learning on networks by converting discrete network structures into continuous domains. However, dynamic networks that evolve over time pose new challenges. To address this, dynamic representation learning methods have gained attention, offering benefits like reduced learning time and improved accuracy by utilizing temporal information. T-batching is a valuable technique for training dynamic network models that reduces training time while preserving vital conditions for accurate modeling. However, we have identified a limitation in the training loss function used with t-batching. Through mathematical analysis, we propose two alternative loss functions that overcome these issues, resulting in enhanced training performance. We extensively evaluate the proposed loss functions on synthetic and real-world dynamic networks. The results consistently demonstrate superior performance compared to the original loss function. Notably, in a real-world network characterized by diverse user interaction histories, the proposed loss functions achieved more than 26.9% enhancement in Mean Reciprocal Rank (MRR) and more than 11.8% improvement in Recall@10. These findings underscore the efficacy of the proposed loss functions in dynamic network modeling. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 385,296 |
2102.05210 | D2A U-Net: Automatic Segmentation of COVID-19 Lesions from CT Slices
with Dilated Convolution and Dual Attention Mechanism | Coronavirus Disease 2019 (COVID-19) has caused great casualties and has become one of the most urgent public health events worldwide. Computed tomography (CT) is a significant screening tool for COVID-19 infection, and automated segmentation of lung infection in COVID-19 CT images will greatly assist diagnosis and health care of patients. However, accurate and automatic segmentation of COVID-19 lung infections remains to be challenging. In this paper we propose a dilated dual attention U-Net (D2A U-Net) for COVID-19 lesion segmentation in CT slices based on dilated convolution and a novel dual attention mechanism to address the issues above. We introduce a dilated convolution module in model decoder to achieve large receptive field, which refines decoding process and contributes to segmentation accuracy. Also, we present a dual attention mechanism composed of two attention modules which are inserted to skip connection and model decoder respectively. The dual attention mechanism is utilized to refine feature maps and reduce semantic gap between different levels of the model. The proposed method has been evaluated on open-source dataset and outperforms cutting-edge methods in semantic segmentation. Our proposed D2A U-Net with pretrained encoder achieves a Dice score of 0.7298 and recall score of 0.7071. Besides, we also build a simplified D2A U-Net without pretrained encoder to provide a fair comparison with other models trained from scratch, which still outperforms popular U-Net family models with a Dice score of 0.7047 and recall score of 0.6626. Our experiment results have shown that by introducing dilated convolution and dual attention mechanism, the number of false positives is significantly reduced, which improves sensitivity to COVID-19 lesions and subsequently brings a significant increase in Dice score. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 219,356 |
1812.06307 | Generative adversarial networks for generation and classification of physical rehabilitation movement episodes | This article proposes a method for mathematical modeling of human movements related to patient exercise episodes performed during physical therapy sessions by using artificial neural networks. The generative adversarial network structure is adopted, whereby a discriminative and a generative model are trained concurrently in an adversarial manner. Different network architectures are examined, with the discriminative and generative models structured as deep subnetworks of hidden layers comprised of convolutional or recurrent computational units. The models are validated on a data set of human movements recorded with an optical motion tracker. The results demonstrate an ability of the networks for classification of new instances of motions, and for generation of motion examples that resemble the recorded motion sequences. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 116,585
1904.02383 | Artificial Neural Network Modeling for Path Loss Prediction in Urban Environments | Although various linear log-distance path loss models have been developed, advanced models are required to represent the path loss more accurately and flexibly for complex environments such as urban areas. This letter proposes an artificial neural network (ANN) based multi-dimensional regression framework for path loss modeling in urban environments in the 3 to 6 GHz frequency band. An ANN is used to learn the path loss structure from the measured path loss data, which is a function of distance and frequency. The effect of the network architecture parameters (activation function, number of hidden layers and nodes) on the prediction accuracy is analyzed. We observe that the proposed model is more accurate and flexible compared to the conventional linear model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 126,417
2104.00426 | WakaVT: A Sequential Variational Transformer for Waka Generation | Poetry generation has long been a challenge for artificial intelligence. In the scope of Japanese poetry generation, many researchers have paid attention to Haiku generation, but few have focused on Waka generation. To further explore the creative potential of natural language generation systems in Japanese poetry creation, we propose a novel Waka generation model, WakaVT, which automatically produces Waka poems given user-specified keywords. Firstly, an additive mask-based approach is presented to satisfy the form constraint. Secondly, the structures of Transformer and variational autoencoder are integrated to enhance the quality of generated content. Specifically, to obtain novelty and diversity, WakaVT employs a sequence of latent variables, which effectively captures word-level variability in Waka data. To improve linguistic quality in terms of fluency, coherence, and meaningfulness, we further propose the fused multilevel self-attention mechanism, which properly models the hierarchical linguistic structure of Waka. To the best of our knowledge, we are the first to investigate Waka generation with models based on Transformer and/or variational autoencoder. Both objective and subjective evaluation results demonstrate that our model outperforms baselines significantly. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 227,997 |
2305.10736 | Counterfactual Debiasing for Generating Factually Consistent Text Summaries | Despite substantial progress in abstractive text summarization to generate fluent and informative texts, the factual inconsistency in the generated summaries remains an important yet challenging problem to be solved. In this paper, we construct causal graphs for abstractive text summarization and identify the intrinsic causes of the factual inconsistency, i.e., the language bias and irrelevancy bias, and further propose a debiasing framework, named CoFactSum, to alleviate the causal effects of these biases by counterfactual estimation. Specifically, the proposed CoFactSum provides two counterfactual estimation strategies, i.e., Explicit Counterfactual Masking with an explicit dynamic masking strategy, and Implicit Counterfactual Training with an implicit discriminative cross-attention mechanism. Meanwhile, we design a Debiasing Degree Adjustment mechanism to dynamically adapt the debiasing degree at each decoding step. Extensive experiments on two widely-used summarization datasets demonstrate the effectiveness of CoFactSum in enhancing the factual consistency of generated summaries compared with several baselines. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 365,208
1310.4849 | On the Bayes-optimality of F-measure maximizers | The F-measure, which has originally been introduced in information retrieval, is nowadays routinely used as a performance metric for problems such as binary classification, multi-label classification, and structured output prediction. Optimizing this measure is a statistically and computationally challenging problem, since no closed-form solution exists. Adopting a decision-theoretic perspective, this article provides a formal and experimental analysis of different approaches for maximizing the F-measure. We start with a Bayes-risk analysis of related loss functions, such as Hamming loss and subset zero-one loss, showing that optimizing such losses as a surrogate of the F-measure leads to a high worst-case regret. Subsequently, we perform a similar type of analysis for F-measure maximizing algorithms, showing that such algorithms are approximate, while relying on additional assumptions regarding the statistical distribution of the binary response variables. Furthermore, we present a new algorithm which is not only computationally efficient but also Bayes-optimal, regardless of the underlying distribution. To this end, the algorithm requires only a quadratic (with respect to the number of binary responses) number of parameters of the joint distribution. We illustrate the practical performance of all analyzed methods by means of experiments with multi-label classification problems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 27,843 |
1903.06261 | Graph Hierarchical Convolutional Recurrent Neural Network (GHCRNN) for Vehicle Condition Prediction | The prediction of urban vehicle flow and speed can greatly facilitate people's travel and provide reasonable advice for the decision-making of relevant government departments. However, due to the spatial, temporal and hierarchical nature of vehicle flow and the many influencing factors such as weather, it is difficult to predict. Most existing methods extract spatial structure information from the road network and time series information from the historical data. However, when extracting spatial features, these methods have high time and space complexity and incorporate a lot of noise. They are difficult to apply on large graphs, and they consider only the influence of surrounding connected road nodes on the central node, ignoring a very important hierarchical relationship, namely the similarity of node features and road network structures across similar nodes. In response to these problems, this paper proposes the Graph Hierarchical Convolutional Recurrent Neural Network (GHCRNN) model. The model uses GCN (Graph Convolutional Networks) to extract spatial features, GRU (Gated Recurrent Units) to extract temporal features, and a learnable Pooling to extract hierarchical information, eliminate redundant information and reduce complexity. Applying this model to the vehicle flow and speed data of Shenzhen and Los Angeles has been well validated, and the time and memory consumption are effectively reduced at comparable precision. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 124,338
2010.05171 | fairseq S2T: Fast Speech-to-Text Modeling with fairseq | We introduce fairseq S2T, a fairseq extension for speech-to-text (S2T) modeling tasks such as end-to-end speech recognition and speech-to-text translation. It follows fairseq's careful design for scalability and extensibility. We provide end-to-end workflows from data pre-processing, model training to offline (online) inference. We implement state-of-the-art RNN-based, Transformer-based as well as Conformer-based models and open-source detailed training recipes. Fairseq's machine translation models and language models can be seamlessly integrated into S2T workflows for multi-task learning or transfer learning. Fairseq S2T documentation and examples are available at https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 200,015 |
2404.19015 | Simple-RF: Regularizing Sparse Input Radiance Fields with Simpler Solutions | Neural Radiance Fields (NeRF) show impressive performance in photo-realistic free-view rendering of scenes. Recent improvements on the NeRF such as TensoRF and ZipNeRF employ explicit models for faster optimization and rendering, as compared to the NeRF that employs an implicit representation. However, both implicit and explicit radiance fields require dense sampling of images in the given scene. Their performance degrades significantly when only a sparse set of views is available. Researchers find that supervising the depth estimated by a radiance field helps train it effectively with fewer views. The depth supervision is obtained either using classical approaches or neural networks pre-trained on a large dataset. While the former may provide only sparse supervision, the latter may suffer from generalization issues. As opposed to the earlier approaches, we seek to learn the depth supervision by designing augmented models and training them along with the main radiance field. Further, we aim to design a framework of regularizations that can work across different implicit and explicit radiance fields. We observe that certain features of these radiance field models overfit to the observed images in the sparse-input scenario. Our key finding is that reducing the capability of the radiance fields with respect to positional encoding, the number of decomposed tensor components or the size of the hash table, constrains the model to learn simpler solutions, which estimate better depth in certain regions. By designing augmented models based on such reduced capabilities, we obtain better depth supervision for the main radiance field. We achieve state-of-the-art view-synthesis performance with sparse input views on popular datasets containing forward-facing and 360$^\circ$ scenes by employing the above regularizations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 450,472
2005.02161 | LambdaNet: Probabilistic Type Inference using Graph Neural Networks | As gradual typing becomes increasingly popular in languages like Python and TypeScript, there is a growing need to infer type annotations automatically. While type annotations help with tasks like code completion and static error catching, these annotations cannot be fully determined by compilers and are tedious to annotate by hand. This paper proposes a probabilistic type inference scheme for TypeScript based on a graph neural network. Our approach first uses lightweight source code analysis to generate a program abstraction called a type dependency graph, which links type variables with logical constraints as well as name and usage information. Given this program abstraction, we then use a graph neural network to propagate information between related type variables and eventually make type predictions. Our neural architecture can predict both standard types, like number or string, as well as user-defined types that have not been encountered during training. Our experimental results show that our approach outperforms prior work in this space by $14\%$ (absolute) on library types, while having the ability to make type predictions that are out of scope for existing techniques. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 175,793 |
2305.19049 | Space MIMO: Direct Unmodified Handheld to Multi-Satellite Communication | This paper examines the uplink transmission of a single-antenna handheld user to a cluster of satellites, with a focus on utilizing the inter-satellite links to enable cooperative signal detection. Two cases are studied: one with full CSI and the other with partial CSI between satellites. The two cases are compared in terms of capacity, overhead, and bit error rate. Additionally, the impact of channel estimation error is analyzed in both designs, and robust detection techniques are proposed to handle channel uncertainty up to a certain level. The performance of each case is demonstrated, and a comparison is made with conventional satellite communication schemes where only one satellite can connect to a user. The results of our study reveal that the proposed constellation with a total of 3168 satellites in orbit can enable a capacity of 800 Mbits/sec through the cooperation of $12$ satellites with an occupied bandwidth of 500 MHz. In contrast, conventional satellite communication approaches with the same system parameters yield a significantly lower capacity of less than 150 Mbits/sec for the nearest satellite. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 369,355
2401.16519 | Extending the kinematic theory of rapid movements with new primitives | The Kinematic Theory of rapid movements, and its associated Sigma-Lognormal, model 2D spatiotemporal trajectories. It is constructed mainly as a temporal overlap of curves between virtual target points. Specifically, it uses an arc and a lognormal as primitives for the representation of the trajectory and velocity, respectively. This paper proposes developing this model, in what we call the Kinematic Theory Transform, which establishes a mathematical framework that allows further primitives to be used. Mainly, we evaluate Euler curves to link virtual target points and Gaussian, Beta, Gamma, Double-bounded lognormal, and Generalized Extreme Value functions to model the bell-shaped velocity profile. Using these primitives, we report reconstruction results with spatiotemporal trajectories executed by human beings, animals, and anthropomorphic robots. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 424,861 |
1710.08107 | Probabilistic Pursuits on Graphs | We consider discrete dynamical systems of "ant-like" agents engaged in a sequence of pursuits on a graph environment. The agents emerge one by one at equal time intervals from a source vertex $s$ and pursue each other by greedily attempting to close the distance to their immediate predecessor, the agent that emerged just before them from $s$, until they arrive at the destination point $t$. Such pursuits have been investigated before in the continuous setting and in discrete time when the underlying environment is a regular grid. In both these settings the agents' walks provably converge to a shortest path from $s$ to $t$. Furthermore, assuming a certain natural probability distribution over the move choices of the agents on the grid (in case there are multiple shortest paths between an agent and its predecessor), the walks converge to the uniform distribution over all shortest paths from $s$ to $t$. We study the evolution of agent walks over a general finite graph environment $G$. Our model is a natural generalization of the pursuit rule proposed for the case of the grid. The main results are as follows. We show that "convergence" to the shortest paths in the sense of previous work extends to all pseudo-modular graphs (i.e. graphs in which every three pairwise intersecting disks have a nonempty intersection), and also to environments obtained by taking graph products, generalizing previous results in two different ways. We show that convergence to the shortest paths is also obtained by chordal graphs, and discuss some further positive and negative results for planar graphs. In the most general case, convergence to the shortest paths is not guaranteed, and the agents may get stuck on sets of recurrent, non-optimal walks from $s$ to $t$. However, we show that the limiting distributions of the agents' walks will always be uniform distributions over some set of walks of equal length. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 83,038
2311.16446 | Centre Stage: Centricity-based Audio-Visual Temporal Action Detection | Previous one-stage action detection approaches have modelled temporal dependencies using only the visual modality. In this paper, we explore different strategies to incorporate the audio modality, using multi-scale cross-attention to fuse the two modalities. We also demonstrate the correlation between the distance from the timestep to the action centre and the accuracy of the predicted boundaries. Thus, we propose a novel network head to estimate the closeness of timesteps to the action centre, which we call the centricity score. This leads to increased confidence for proposals that exhibit more precise boundaries. Our method can be integrated with other one-stage anchor-free architectures and we demonstrate this on three recent baselines on the EPIC-Kitchens-100 action detection benchmark where we achieve state-of-the-art performance. Detailed ablation studies showcase the benefits of fusing audio and our proposed centricity scores. Code and models for our proposed method are publicly available at https://github.com/hanielwang/Audio-Visual-TAD.git | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 410,887 |
2203.05178 | An Audio-Visual Attention Based Multimodal Network for Fake Talking Face Videos Detection | DeepFake-based digital facial forgery is threatening public media security; especially when lip manipulation is used in talking face generation, the difficulty of fake video detection is further increased. When only the lip shape is changed to match the given speech, the facial features of identity are hard to discriminate in such fake talking face videos. Together with the lack of attention to the audio stream as prior knowledge, failure to detect such fake talking faces also becomes inevitable. Inspired by the decision-making mechanism of the human multisensory perception system, which enables auditory information to enhance post-sensory visual evidence for informed decisions, in this study a fake talking face detection framework, FTFDNet, is proposed that incorporates audio and visual representations to achieve more accurate detection of fake talking face videos. Furthermore, an audio-visual attention mechanism (AVAM) is proposed to discover more informative features, which can be seamlessly integrated into any audio-visual CNN architecture by modularization. With the additional AVAM, the proposed FTFDNet achieves a better detection performance on the established dataset (FTFDD). The evaluation of the proposed work has shown excellent performance on the detection of fake talking face videos, achieving a detection rate above 97%. | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 284,732
2404.08582 | FashionFail: Addressing Failure Cases in Fashion Object Detection and Segmentation | In the realm of fashion object detection and segmentation for online shopping images, existing state-of-the-art fashion parsing models encounter limitations, particularly when exposed to non-model-worn apparel and close-up shots. To address these failures, we introduce FashionFail; a new fashion dataset with e-commerce images for object detection and segmentation. The dataset is efficiently curated using our novel annotation tool that leverages recent foundation models. The primary objective of FashionFail is to serve as a test bed for evaluating the robustness of models. Our analysis reveals the shortcomings of leading models, such as Attribute-Mask R-CNN and Fashionformer. Additionally, we propose a baseline approach using naive data augmentation to mitigate common failure cases and improve model robustness. Through this work, we aim to inspire and support further research in fashion item detection and segmentation for industrial applications. The dataset, annotation tool, code, and models are available at \url{https://rizavelioglu.github.io/fashionfail/}. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 446,300
1705.01861 | Action Tubelet Detector for Spatio-Temporal Action Localization | Current state-of-the-art approaches for spatio-temporal action localization rely on detections at the frame level that are then linked or tracked across time. In this paper, we leverage the temporal continuity of videos instead of operating at the frame level. We propose the ACtion Tubelet detector (ACT-detector) that takes as input a sequence of frames and outputs tubelets, i.e., sequences of bounding boxes with associated scores. The same way state-of-the-art object detectors rely on anchor boxes, our ACT-detector is based on anchor cuboids. We build upon the SSD framework. Convolutional features are extracted for each frame, while scores and regressions are based on the temporal stacking of these features, thus exploiting information from a sequence. Our experimental results show that leveraging sequences of frames significantly improves detection performance over using individual frames. The gain of our tubelet detector can be explained by both more accurate scores and more precise localization. Our ACT-detector outperforms the state-of-the-art methods for frame-mAP and video-mAP on the J-HMDB and UCF-101 datasets, in particular at high overlap thresholds. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 72,894 |
2208.09169 | Cost and efficiency requirements for a successful electricity storage in a highly renewable European energy system | Future highly renewable energy systems might require substantial storage deployment. At the current stage, the technology portfolio of dominant storage options is limited to pumped-hydro storage and Li-Ion batteries. It is uncertain which storage design will be able to compete with these options. Considering Europe as a case study, we derive the cost and efficiency requirements of a generic storage technology, which we refer to as storage-X, to be deployed in the cost-optimal system. This is performed while including existing pumped-hydro facilities and accounting for the competition from stationary Li-ion batteries, flexible generation technology, and flexible demand in a highly renewable sector-coupled energy system. Based on a sample space of 724 storage configurations, we show that energy capacity cost and discharge efficiency largely determine the optimal storage deployment, in agreement with previous studies. Here, we show that charge capacity cost is also important due to its impact on renewable curtailment. A significant deployment of storage-X in a cost-optimal system requires (a) discharge efficiency of at least 95%, (b) discharge efficiency of at least 50% together with low energy capacity cost (10EUR/kWh), or (c) discharge efficiency of at least 25% with very low energy capacity cost (2EUR/kWh). Comparing our findings with seven emerging technologies reveals that none of them fulfill these requirements. Thermal Energy Storage (TES) is, however, on the verge of qualifying due to its low energy capacity cost and concurrent low charge capacity cost. Exploring the space of storage designs reveals that system cost reduction from storage-X deployment can reach 9% at its best, but this requires high round-trip efficiency (90%) and low charge capacity cost (35EUR/kW). | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 313,612
1710.10248 | Tensor network language model | We propose a new statistical model suitable for machine learning of systems with long distance correlations such as natural languages. The model is based on directed acyclic graph decorated by multi-linear tensor maps in the vertices and vector spaces in the edges, called tensor network. Such tensor networks have been previously employed for effective numerical computation of the renormalization group flow on the space of effective quantum field theories and lattice models of statistical mechanics. We provide explicit algebro-geometric analysis of the parameter moduli space for tree graphs, discuss model properties and applications such as statistical translation. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | 83,337 |
2110.05165 | Exchangeability-Aware Sum-Product Networks | Sum-Product Networks (SPNs) are expressive probabilistic models that provide exact, tractable inference. They achieve this efficiency by making use of local independence. On the other hand, mixtures of exchangeable variable models (MEVMs) are a class of tractable probabilistic models that make use of exchangeability of discrete random variables to render inference tractable. Exchangeability, which arises naturally in relational domains, has not been considered for efficient representation and inference in SPNs yet. The contribution of this paper is a novel probabilistic model which we call Exchangeability-Aware Sum-Product Networks (XSPNs). It contains both SPNs and MEVMs as special cases, and combines the ability of SPNs to efficiently learn deep probabilistic models with the ability of MEVMs to efficiently handle exchangeable random variables. We introduce a structure learning algorithm for XSPNs and empirically show that they can be more accurate than conventional SPNs when the data contains repeated, interchangeable parts. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 260,186 |
2411.11044 | Efficient Federated Unlearning with Adaptive Differential Privacy Preservation | Federated unlearning (FU) offers a promising solution to effectively address the need to erase the impact of specific clients' data on the global model in federated learning (FL), thereby granting individuals the ``Right to be Forgotten". The most straightforward approach to achieve unlearning is to train the model from scratch, excluding clients who request data removal, but it is resource-intensive. Current state-of-the-art FU methods extend traditional FL frameworks by leveraging stored historical updates, enabling more efficient unlearning than training from scratch. However, the use of stored updates introduces significant privacy risks. Adversaries with access to these updates can potentially reconstruct clients' local data, a well-known vulnerability in the privacy domain. While privacy-enhanced techniques exist, their applications to FU scenarios that balance unlearning efficiency with privacy protection remain underexplored. To address this gap, we propose FedADP, a method designed to achieve both efficiency and privacy preservation in FU. Our approach incorporates an adaptive differential privacy (DP) mechanism, carefully balancing privacy and unlearning performance through a novel budget allocation strategy tailored for FU. FedADP also employs a dual-layered selection process, focusing on global models with significant changes and client updates closely aligned with the global model, reducing storage and communication costs. Additionally, a novel calibration method is introduced to facilitate effective unlearning. Extensive experimental results demonstrate that FedADP effectively manages the trade-off between unlearning efficiency and privacy protection. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 508,898
2205.09191 | High-Order Multilinear Discriminant Analysis via Order-$\textit{n}$ Tensor Eigendecomposition | Higher-order data with high dimensionality is of immense importance in many areas of machine learning, computer vision, and video analytics. Multidimensional arrays (commonly referred to as tensors) are used for arranging higher-order data structures while keeping the natural representation of the data samples. In the past decade, great efforts have been made to extend the classic linear discriminant analysis for higher-order data classification generally referred to as multilinear discriminant analysis (MDA). Most of the existing approaches are based on the Tucker decomposition and $\textit{n}$-mode tensor-matrix products. The current paper presents a new approach to tensor-based multilinear discriminant analysis referred to as High-Order Multilinear Discriminant Analysis (HOMLDA). This approach is based upon the tensor decomposition where an order-$\textit{n}$ tensor can be written as a product of order-$\textit{n}$ tensors and has a natural extension to traditional linear discriminant analysis (LDA). Furthermore, the resulting framework, HOMLDA, might produce a within-class scatter tensor that is close to singular. Thus, computing the inverse inaccurately may distort the discriminant analysis. To address this problem, an improved method referred to as Robust High-Order Multilinear Discriminant Analysis (RHOMLDA) is introduced. Experimental results on multiple data sets illustrate that our proposed approach provides improved classification performance with respect to the current Tucker decomposition-based supervised learning methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 297,180
1603.06121 | Buried object detection using handheld WEMI with task-driven extended functions of multiple instances | Many effective supervised discriminative dictionary learning methods have been developed in the literature. However, when training these algorithms, precise ground truth of the training data is required to provide very accurate point-wise labels. Yet, in many applications, accurate labels are not always feasible. This is especially true in the case of buried object detection, in which the sizes of the objects are not consistent. In this paper, a new multiple instance dictionary learning algorithm for detecting buried objects using a handheld WEMI sensor is detailed. The new algorithm, Task-Driven Extended Functions of Multiple Instances, can overcome data that does not have very precise point-wise labels and still learn a highly discriminative dictionary. Results are presented and discussed on measured WEMI data. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 53,444
2002.10131 | Modeling Aggression Propagation on Social Media | Cyberaggression has been studied in various contexts and online social platforms, and modeled on different data using state-of-the-art machine and deep learning algorithms to enable automatic detection and blocking of this behavior. Users can be influenced to act aggressively or even bully others because of elevated toxicity and aggression in their own (online) social circle. In effect, this behavior can propagate from one user and neighborhood to another, and therefore, spread in the network. Interestingly, to our knowledge, no work has modeled the network dynamics of aggressive behavior. In this paper, we take a first step towards this direction by studying propagation of aggression on social media using opinion dynamics. We propose ways to model how aggression may propagate from one user to another, depending on how each user is connected to other aggressive or regular users. Through extensive simulations on Twitter data, we study how aggressive behavior could propagate in the network. We validate our models with crawled and annotated ground truth data, reaching up to 80% AUC, and discuss the results and implications of our work. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 165,301 |
1608.05470 | Dual User Selection for Security Enhancement in Uplink Multiuser Systems | This letter proposes a novel dual user selection scheme for uplink transmission with multiple users, where a jamming user and a served user are jointly selected to improve the secrecy performance. Specifically, the jamming user transmits a jamming signal with a certain rate, so that the base station (BS) can decode the jamming signal before detecting the secret information signal. By carefully selecting the jamming user and the served user, the eavesdropper decodes the jamming signal with a low probability, while the BS can achieve a high receive signal-to-noise ratio (SNR). Therefore, the uplink transmission achieves a dual secrecy improvement by jamming and scheduling. It is shown that the proposed scheme can significantly improve the security of the uplink transmission. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 59,980
1702.02223 | Comparison of machine learning methods for classifying mediastinal lymph
node metastasis of non-small cell lung cancer from 18F-FDG PET/CT images | The present study shows that the performance of CNN is not significantly different from the best classical methods and human doctors for classifying mediastinal lymph node metastasis of NSCLC from PET/CT images. Because CNN does not need tumor segmentation or feature calculation, it is more convenient and more objective than the classical methods. However, CNN does not make use of the important diagnostic features, which have been proved more discriminative than the texture features for classifying small-sized lymph nodes. Therefore, incorporating the diagnostic features into CNN is a promising direction for future research. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 67,945
2110.11316 | CLOOB: Modern Hopfield Networks with InfoLOOB Outperform CLIP | CLIP yielded impressive results on zero-shot transfer learning tasks and is considered as a foundation model like BERT or GPT3. CLIP vision models that have a rich representation are pre-trained using the InfoNCE objective and natural language supervision before they are fine-tuned on particular tasks. Though CLIP excels at zero-shot transfer learning, it suffers from an explaining away problem, that is, it focuses on one or few features, while neglecting other relevant features. This problem is caused by insufficiently extracting the covariance structure in the original multi-modal data. We suggest to use modern Hopfield networks to tackle the problem of explaining away. Their retrieved embeddings have an enriched covariance structure derived from co-occurrences of features in the stored embeddings. However, modern Hopfield networks increase the saturation effect of the InfoNCE objective which hampers learning. We propose to use the InfoLOOB objective to mitigate this saturation effect. We introduce the novel "Contrastive Leave One Out Boost" (CLOOB), which uses modern Hopfield networks for covariance enrichment together with the InfoLOOB objective. In experiments we compare CLOOB to CLIP after pre-training on the Conceptual Captions and the YFCC dataset with respect to their zero-shot transfer learning performance on other datasets. CLOOB consistently outperforms CLIP at zero-shot transfer learning across all considered architectures and datasets. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 262,440 |
2311.17978 | AutArch: An AI-assisted workflow for object detection and automated
recording in archaeological catalogues | The context of this paper is the creation of large uniform archaeological datasets from heterogeneous published resources, such as find catalogues - with the help of AI and Big Data. The paper is concerned with the challenge of consistent assemblages of archaeological data. We cannot simply combine existing records, as they differ in terms of quality and recording standards. Thus, records have to be recreated from published archaeological illustrations. This is only a viable path with the help of automation. The contribution of this paper is a new workflow for collecting data from archaeological find catalogues available as legacy resources, such as archaeological drawings and photographs in large unsorted PDF files; the workflow relies on custom software (AutArch) supporting image processing, object detection, and interactive means of validating and adjusting automatically retrieved data. We integrate artificial intelligence (AI) in terms of neural networks for object detection and classification into the workflow, thereby speeding up, automating, and standardising data collection. Objects commonly found in archaeological catalogues - such as graves, skeletons, ceramics, ornaments, stone tools and maps - are detected. Those objects are spatially related and analysed to extract real-life attributes, such as the size and orientation of graves based on the north arrow and the scale. We also automate recording of geometric whole-outlines through contour detection, as an alternative to landmark-based geometric morphometrics. Detected objects, contours, and other automatically retrieved data can be manually validated and adjusted. We use third millennium BC Europe (encompassing cultures such as 'Corded Ware' and 'Bell Beaker', and their burial practices) as a 'testing ground' and for evaluation purposes; this includes a user study for the workflow and the AutArch software. 
| false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 411,512 |
2112.03103 | FastWARC: Optimizing Large-Scale Web Archive Analytics | Web search and other large-scale web data analytics rely on processing archives of web pages stored in a standardized and efficient format. Since its introduction in 2008, the IIPC's Web ARChive (WARC) format has become the standard format for this purpose. As a list of individually compressed records of HTTP requests and responses, it allows for constant-time random access to all kinds of web data via off-the-shelf open source parsers in many programming languages, such as WARCIO, the de-facto standard for Python. When processing web archives at the terabyte or petabyte scale, however, even small inefficiencies in these tools add up quickly, resulting in hours, days, or even weeks of wasted compute time. Reviewing the basic components of WARCIO and analyzing its bottlenecks, we proceed to build FastWARC, a new high-performance WARC processing library for Python, written in C++/Cython, which yields performance improvements by factors of 1.6-8x. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 270,086
1207.4074 | An analytical comparison of coalescent-based multilocus methods: The
three-taxon case | Incomplete lineage sorting (ILS) is a common source of gene tree incongruence in multilocus analyses. A large number of methods have been developed to infer species trees in the presence of ILS. Here we provide a mathematical analysis of several coalescent-based methods. Our analysis is performed on a three-taxon species tree and assumes that the gene trees are correctly reconstructed along with their branch lengths. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 17,529 |
2306.17038 | Comparison of Single- and Multi- Objective Optimization Quality for
Evolutionary Equation Discovery | Evolutionary differential equation discovery has proved to be a tool for obtaining equations with fewer a priori assumptions than conventional approaches, such as sparse symbolic regression over the complete library of possible terms. The equation discovery field contains two independent directions. The first one is purely mathematical and concerns differentiation, the object of optimization, and its relation to the functional spaces, among other topics. The second one is dedicated purely to the optimization problem statement. Both topics are worth investigating to improve the algorithm's ability to handle experimental data in a more "artificial intelligence" way, without significant pre-processing and a priori knowledge of their nature. In this paper, we consider the prevalence of either single-objective optimization, which considers only the discrepancy between selected terms in the equation, or multi-objective optimization, which additionally takes into account the complexity of the obtained equation. The proposed comparison approach is shown on classical model examples -- the Burgers equation, the wave equation, and the Korteweg-de Vries equation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 376,570
1704.01788 | A Survey of Skyline Query Processing | Living in the Information Age allows almost everyone to have access to a large amount of information and options to choose from in order to fulfill their needs. In many cases, the amount of information available and the rate of change may hide the optimal and truly desired solution. This reveals the need for a mechanism that will highlight the best options to choose in every possible scenario. Based on this, the skyline query was proposed: a decision support mechanism that retrieves the value-for-money options of a dataset by identifying the objects that present the optimal combination of the characteristics of the dataset. This paper surveys the state-of-the-art techniques for skyline query processing, the numerous variations of the initial algorithm that were proposed to solve similar problems, and the application-specific approaches that were developed to provide a solution efficiently in each case. Additionally, in each section a taxonomy is outlined along with the key aspects of each algorithm and its relation to previous studies. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 71,325
2305.06691 | Correcting One Error in Non-Binary Channels with Feedback | In this paper, the problem of correction of a single error in $q$-ary symmetric channel with noiseless feedback is considered. We propose an algorithm to construct codes with feedback inductively. For all prime power $q$ we prove that two instances of feedback are sufficient to transmit over the $q$-ary symmetric channel the same number of messages as in the case of complete feedback. Our other contribution is the construction of codes with one-time feedback with the same parameters as Hamming codes for $q$ that is not a prime power. We also construct single-error-correcting codes with one-time feedback of size $q^{n-2}$ for arbitrary $q$ and $n\leq q+1$, which can be seen as an analog for Reed-Solomon codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 363,634 |
2411.11906 | $\text{S}^{3}$Mamba: Arbitrary-Scale Super-Resolution via Scaleable
State Space Model | Arbitrary scale super-resolution (ASSR) aims to super-resolve low-resolution images to high-resolution images at any scale using a single model, addressing the limitations of traditional super-resolution methods that are restricted to fixed-scale factors (e.g., $\times2$, $\times4$). The advent of Implicit Neural Representations (INR) has brought forth a plethora of novel methodologies for ASSR, which facilitate the reconstruction of original continuous signals by modeling a continuous representation space for coordinates and pixel values, thereby enabling arbitrary-scale super-resolution. Consequently, the primary objective of ASSR is to construct a continuous representation space derived from low-resolution inputs. However, existing methods, primarily based on CNNs and Transformers, face significant challenges such as high computational complexity and inadequate modeling of long-range dependencies, which hinder their effectiveness in real-world applications. To overcome these limitations, we propose a novel arbitrary-scale super-resolution method, called $\text{S}^{3}$Mamba, to construct a scalable continuous representation space. Specifically, we propose a Scalable State Space Model (SSSM) to modulate the state transition matrix and the sampling matrix of step size during the discretization process, achieving scalable and continuous representation modeling with linear computational complexity. Additionally, we propose a novel scale-aware self-attention mechanism to further enhance the network's ability to perceive global important features at different scales, thereby building the $\text{S}^{3}$Mamba to achieve superior arbitrary-scale super-resolution. Extensive experiments on both synthetic and real-world benchmarks demonstrate that our method achieves state-of-the-art performance and superior generalization capabilities at arbitrary super-resolution scales. 
| false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 509,220 |
2302.07510 | Best Arm Identification for Stochastic Rising Bandits | Stochastic Rising Bandits (SRBs) model sequential decision-making problems in which the expected reward of the available options increases every time they are selected. This setting captures a wide range of scenarios in which the available options are learning entities whose performance improves (in expectation) over time (e.g., online best model selection). While previous works addressed the regret minimization problem, this paper focuses on the fixed-budget Best Arm Identification (BAI) problem for SRBs. In this scenario, given a fixed budget of rounds, we are asked to provide a recommendation about the best option at the end of the identification process. We propose two algorithms to tackle the above-mentioned setting, namely R-UCBE, which resorts to a UCB-like approach, and R-SR, which employs a successive reject procedure. Then, we prove that, with a sufficiently large budget, they provide guarantees on the probability of properly identifying the optimal option at the end of the learning process and on the simple regret. Furthermore, we derive a lower bound on the error probability, matched by our R-SR (up to constants), and illustrate how the need for a sufficiently large budget is unavoidable in the SRB setting. Finally, we numerically validate the proposed algorithms in both synthetic and realistic environments. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 345,753 |
1808.07185 | Keyphrase Generation with Correlation Constraints | In this paper, we study automatic keyphrase generation. Although conventional approaches to this task show promising results, they neglect correlation among keyphrases, resulting in duplication and coverage issues. To solve these problems, we propose a new sequence-to-sequence architecture for keyphrase generation named CorrRNN, which captures correlation among multiple keyphrases in two ways. First, we employ a coverage vector to indicate whether the word in the source document has been summarized by previous phrases to improve the coverage for keyphrases. Second, preceding phrases are taken into account to eliminate duplicate phrases and improve result coherence. Experiment results show that our model significantly outperforms the state-of-the-art method on benchmark datasets in terms of both accuracy and diversity. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 105,682 |
2305.10747 | Strong Structural Controllability of Structured Networks with MIMO node
systems | This article addresses the problem of strong structural controllability of structured networks with multi-input multi-output (MIMO) node systems. We first present necessary and sufficient conditions for strong structural controllability, which involve both algebraic and graph-theoretic aspects. These conditions are computationally expensive, especially for large-scale networks with high-dimensional state spaces. To overcome this computational complexity, we propose a necessary algebraic condition from a node system's perspective and a graph-theoretic condition from a network topology's perspective. The latter condition is derived from the structured interconnection laws and employs a new color change rule, namely the weakly color change rule introduced in this paper. Overall, this article contributes to the study of strong structural controllability in structured networks with MIMO node systems, providing both theoretical and practical insights for their analysis and design. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 365,213
2401.09819 | PPNet: A Two-Stage Neural Network for End-to-end Path Planning | Classical path planners, such as sampling-based path planners, can provide probabilistic completeness guarantees in the sense that the probability that the planner fails to return a solution, if one exists, decays to zero as the number of samples approaches infinity. However, finding a near-optimal feasible solution in a given period is challenging in many applications such as autonomous vehicles. To achieve an end-to-end near-optimal path planner, we first divide the path planning problem into two subproblems: path space segmentation and waypoint generation in the given path space. We further propose a two-stage neural network named Path Planning Network (PPNet), in which each stage solves one of the aforementioned subproblems. Moreover, we propose a novel, efficient data generation method for path planning named EDaGe-PP. EDaGe-PP can generate data with continuous-curvature paths with analytical expression while satisfying the clearance requirement. The results show that the total computation time for generating random 2D path planning data is less than 1/33 of that of other methods, and that the success rate of PPNet trained on the dataset generated by EDaGe-PP is about 2 times that of other methods. We validate PPNet against state-of-the-art path planning methods. The results show that PPNet can find a near-optimal solution in 15.3 ms, which is much shorter than that of the state-of-the-art path planners. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 422,397
1910.14258 | Towards a Predictive Patent Analytics and Evaluation Platform | The importance of patents is well recognised across many regions of the world. Many patent mining systems have been proposed, but with limited predictive capabilities. In this demo, we showcase how predictive algorithms leveraging the state-of-the-art machine learning and deep learning techniques can be used to improve understanding of patents for inventors, patent evaluators, and business analysts alike. Our demo video is available at http://ibm.biz/ecml2019-demo-patent-analytics | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 151,606 |
2401.02160 | Human-in-the-Loop Policy Optimization for Preference-Based
Multi-Objective Reinforcement Learning | Multi-objective reinforcement learning (MORL) aims to find a set of high-performing and diverse policies that address trade-offs between multiple conflicting objectives. However, in practice, decision makers (DMs) often deploy only one or a limited number of trade-off policies. Providing too many diversified trade-off policies to the DM not only significantly increases their workload but also introduces noise in multi-criterion decision-making. With this in mind, we propose a human-in-the-loop policy optimization framework for preference-based MORL that interactively identifies policies of interest. Our method proactively learns the DM's implicit preference information without requiring any a priori knowledge, which is often unavailable in real-world black-box decision scenarios. The learned preference information is used to progressively guide policy optimization towards policies of interest. We evaluate our approach against three conventional MORL algorithms that do not consider preference information and four state-of-the-art preference-based MORL algorithms on two MORL environments for robot control and smart grid management. Experimental results fully demonstrate the effectiveness of our proposed method in comparison to the other peer algorithms. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 419,627 |
2208.04268 | Label-Free Synthetic Pretraining of Object Detectors | We propose a new approach, Synthetic Optimized Layout with Instance Detection (SOLID), to pretrain object detectors with synthetic images. Our "SOLID" approach consists of two main components: (1) generating synthetic images using a collection of unlabelled 3D models with optimized scene arrangement; (2) pretraining an object detector on "instance detection" task - given a query image depicting an object, detecting all instances of the exact same object in a target image. Our approach does not need any semantic labels for pretraining and allows the use of arbitrary, diverse 3D models. Experiments on COCO show that with optimized data generation and a proper pretraining task, synthetic data can be highly effective data for pretraining object detectors. In particular, pretraining on rendered images achieves performance competitive with pretraining on real images while using significantly less computing resources. Code is available at https://github.com/princeton-vl/SOLID. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 312,051 |
1610.09077 | Integrating Topic Models and Latent Factors for Recommendation | Nowadays, we have large amounts of online items in various web-based applications, which makes it an important task to build effective personalized recommender systems so as to save users' efforts in information seeking. One of the most extensively and successfully used methods for personalized recommendation is the Collaborative Filtering (CF) technique, which makes recommendations based on users' historical choices as well as those of others. The most popular CF methods, such as the Latent Factor Model (LFM), model how users evaluate items by understanding the hidden dimensions or factors of their opinions. How to model these hidden factors is key to improving the performance of recommender systems. In this work, we consider the problem of hotel recommendation for travel planning services by integrating the location information and the user's preference for recommendation. The intuition is that user preferences may change dynamically over different locations, so treating the historical decisions of a user as static or universally applicable can be infeasible in real-world applications. For example, users may prefer chain brand hotels with standard configurations when traveling for business, while they may prefer unique local hotels when traveling for entertainment. In this paper, we aim to provide trip-level personalization for users in recommendation. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 62,999
1909.04697 | When Single Event Upset Meets Deep Neural Networks: Observations,
Explorations, and Remedies | Deep Neural Network has proved its potential in various perception tasks and hence become an appealing option for interpretation and data processing in security sensitive systems. However, security-sensitive systems demand not only high perception performance, but also design robustness under various circumstances. Unlike prior works that study network robustness from software level, we investigate from hardware perspective about the impact of Single Event Upset (SEU) induced parameter perturbation (SIPP) on neural networks. We systematically define the fault models of SEU and then provide the definition of sensitivity to SIPP as the robustness measure for the network. We are then able to analytically explore the weakness of a network and summarize the key findings for the impact of SIPP on different types of bits in a floating point parameter, layer-wise robustness within the same network and impact of network depth. Based on those findings, we propose two remedy solutions to protect DNNs from SIPPs, which can mitigate accuracy degradation from 28% to 0.27% for ResNet with merely 0.24-bit SRAM area overhead per parameter. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 144,867 |
2502.07579 | Single-Step Consistent Diffusion Samplers | Sampling from unnormalized target distributions is a fundamental yet challenging task in machine learning and statistics. Existing sampling algorithms typically require many iterative steps to produce high-quality samples, leading to high computational costs that limit their practicality in time-sensitive or resource-constrained settings. In this work, we introduce consistent diffusion samplers, a new class of samplers designed to generate high-fidelity samples in a single step. We first develop a distillation algorithm to train a consistent diffusion sampler from a pretrained diffusion model without pre-collecting large datasets of samples. Our algorithm leverages incomplete sampling trajectories and noisy intermediate states directly from the diffusion process. We further propose a method to train a consistent diffusion sampler from scratch, fully amortizing exploration by training a single model that both performs diffusion sampling and skips intermediate steps using a self-consistency loss. Through extensive experiments on a variety of unnormalized distributions, we show that our approach yields high-fidelity samples using less than 1% of the network evaluations required by traditional diffusion samplers. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 532,669 |
1705.10887 | Efficient, sparse representation of manifold distance matrices for
classical scaling | Geodesic distance matrices can reveal shape properties that are largely invariant to non-rigid deformations, and thus are often used to analyze and represent 3-D shapes. However, these matrices grow quadratically with the number of points. Thus for large point sets it is common to use a low-rank approximation to the distance matrix, which fits in memory and can be efficiently analyzed using methods such as multidimensional scaling (MDS). In this paper we present a novel sparse method for efficiently representing geodesic distance matrices using biharmonic interpolation. This method exploits knowledge of the data manifold to learn a sparse interpolation operator that approximates distances using a subset of points. We show that our method is 2x faster and uses 20x less memory than current leading methods for solving MDS on large point sets, with similar quality. This enables analyses of large point sets that were previously infeasible. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 74,489 |
1905.12966 | Quantifying consensus of rankings based on q-support patterns | Rankings, representing preferences over a set of candidates, are widely used in many information systems, e.g., group decision making and information retrieval. It is of great importance to evaluate the consensus of the obtained rankings from multiple agents. An overall measure of the consensus degree provides an insight into the ranking data. Moreover, it could provide a quantitative indicator for consensus comparison between groups and further improvement of a ranking system. Existing studies are insufficient in assessing the overall consensus of a ranking set. They did not provide an evaluation of the consensus degree of preference patterns in most rankings. In this paper, a novel consensus quantifying approach, without the need for any correlation or distance functions as in existing studies of consensus, is proposed based on a concept of q-support patterns of rankings. The q-support patterns represent the commonality embedded in a set of rankings. A method for detecting outliers in a set of rankings is naturally derived from the proposed consensus quantifying approach. Experimental studies are conducted to demonstrate the effectiveness of the proposed approach. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 132,955 |
2010.07410 | Six Attributes of Unhealthy Conversation | We present a new dataset of approximately 44000 comments labeled by crowdworkers. Each comment is labelled as either 'healthy' or 'unhealthy', in addition to binary labels for the presence of six potentially 'unhealthy' sub-attributes: (1) hostile; (2) antagonistic, insulting, provocative or trolling; (3) dismissive; (4) condescending or patronising; (5) sarcastic; and/or (6) an unfair generalisation. Each label also has an associated confidence score. We argue that there is a need for datasets which enable research based on a broad notion of 'unhealthy online conversation'. We build this typology to encompass a substantial proportion of the individual comments which contribute to unhealthy online conversation. For some of these attributes, this is the first publicly available dataset of this scale. We explore the quality of the dataset, present some summary statistics and initial models to illustrate the utility of this data, and highlight limitations and directions for further research. | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 200,798 |
1807.04040 | Learning Singularity Avoidance | With the increase in complexity of robotic systems and the rise in non-expert users, it can be assumed that task constraints are not explicitly known. In tasks where avoiding singularity is critical to its success, this paper provides an approach, especially for non-expert users, for the system to learn the constraints contained in a set of demonstrations, such that they can be used to optimise an autonomous controller to avoid singularity, without having to explicitly know the task constraints. The proposed approach avoids singularity, and thereby unpredictable behaviour when carrying out a task, by maximising the learnt manipulability throughout the motion of the constrained system, and is not limited to kinematic systems. Its benefits are demonstrated through comparisons with other control policies which show that the constrained manipulability of a system learnt through demonstration can be used to avoid singularities in cases where these other policies would fail. In the absence of the systems manipulability subject to a tasks constraints, the proposed approach can be used instead to infer these with results showing errors less than 10^-5 in 3DOF simulated systems as well as 10^-2 using a 7DOF real world robotic system. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 102,654 |
2001.09678 | A Robust Real-Time Computing-based Environment Sensing System for Intelligent Vehicle | For intelligent vehicles, sensing the 3D environment is the first but crucial step. In this paper, we build a real-time advanced driver assistance system based on a low-power mobile platform. The system is a real-time multi-scheme integrated innovation system, which combines stereo matching algorithm with machine learning based obstacle detection approach and takes advantage of the distributed computing technology of a mobile platform with GPU and CPUs. First of all, a multi-scale fast MPV (Multi-Path-Viterbi) stereo matching algorithm is proposed, which can generate robust and accurate disparity map. Then a machine learning, which is based on fusion technology of monocular and binocular, is applied to detect the obstacles. We also advance an automatic fast calibration mechanism based on Zhang's calibration method. Finally, the distributed computing and reasonable data flow programming are applied to ensure the operational efficiency of the system. The experimental results show that the system can achieve robust and accurate real-time environment perception for intelligent vehicles, which can be directly used in the commercial real-time intelligent driving applications. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 161,637 |
2106.00421 | OpenBox: A Generalized Black-box Optimization Service | Black-box optimization (BBO) has a broad range of applications, including automatic machine learning, engineering, physics, and experimental design. However, it remains a challenge for users to apply BBO methods to their problems at hand with existing software packages, in terms of applicability, performance, and efficiency. In this paper, we build OpenBox, an open-source and general-purpose BBO service with improved usability. The modular design behind OpenBox also facilitates flexible abstraction and optimization of basic BBO components that are common in other existing systems. OpenBox is distributed, fault-tolerant, and scalable. To improve efficiency, OpenBox further utilizes "algorithm agnostic" parallelization and transfer learning. Our experimental results demonstrate the effectiveness and efficiency of OpenBox compared to existing systems. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 238,117 |
1206.1492 | Ordinary Search Engine Users Carrying Out Complex Search Tasks | Web search engines have become the dominant tools for finding information on the Internet. Due to their popularity, users apply them to a wide range of search needs, from simple look-ups to rather complex information tasks. This paper presents the results of a study to investigate the characteristics of these complex information needs in the context of Web search engines. The aim of the study is to find out more about (1) what makes complex search tasks distinct from simple tasks and if it is possible to find simple measures for describing their complexity, (2) if search success for a task can be predicted by means of unique measures, and (3) if successful searchers show a different behavior than unsuccessful ones. The study includes 60 people who carried out a set of 12 search tasks with current commercial search engines. Their behavior was logged with the Search-Logger tool. The results confirm that complex tasks show significantly different characteristics than simple tasks. Yet it seems to be difficult to distinguish successful from unsuccessful search behaviors. Good searchers can be differentiated from bad searchers by means of measurable parameters. The implications of these findings for search engine vendors are discussed. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 16,375 |
2501.19184 | A Survey on Class-Agnostic Counting: Advancements from Reference-Based to Open-World Text-Guided Approaches | Visual object counting has recently shifted towards class-agnostic counting (CAC), which addresses the challenge of counting objects across arbitrary categories -- a crucial capability for flexible and generalizable counting systems. Unlike humans, who effortlessly identify and count objects from diverse categories without prior knowledge, most existing counting methods are restricted to enumerating instances of known classes, requiring extensive labeled datasets for training and struggling in open-vocabulary settings. In contrast, CAC aims to count objects belonging to classes never seen during training, operating in a few-shot setting. In this paper, we present the first comprehensive review of CAC methodologies. We propose a taxonomy to categorize CAC approaches into three paradigms based on how target object classes can be specified: reference-based, reference-less, and open-world text-guided. Reference-based approaches achieve state-of-the-art performance by relying on exemplar-guided mechanisms. Reference-less methods eliminate exemplar dependency by leveraging inherent image patterns. Finally, open-world text-guided methods use vision-language models, enabling object class descriptions via textual prompts, offering a flexible and promising solution. Based on this taxonomy, we provide an overview of the architectures of 29 CAC approaches and report their results on gold-standard benchmarks. We compare their performance and discuss their strengths and limitations. Specifically, we present results on the FSC-147 dataset, setting a leaderboard using gold-standard metrics, and on the CARPK dataset to assess generalization capabilities. Finally, we offer a critical discussion of persistent challenges, such as annotation dependency and generalization, alongside future directions. We believe this survey will be a valuable resource, showcasing CAC advancements and guiding future research. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 529,049 |