id stringlengths 9 16 | title stringlengths 4 278 | abstract stringlengths 3 4.08k | cs.HC bool 2 classes | cs.CE bool 2 classes | cs.SD bool 2 classes | cs.SI bool 2 classes | cs.AI bool 2 classes | cs.IR bool 2 classes | cs.LG bool 2 classes | cs.RO bool 2 classes | cs.CL bool 2 classes | cs.IT bool 2 classes | cs.SY bool 2 classes | cs.CV bool 2 classes | cs.CR bool 2 classes | cs.CY bool 2 classes | cs.MA bool 2 classes | cs.NE bool 2 classes | cs.DB bool 2 classes | Other bool 2 classes | __index_level_0__ int64 0 541k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2203.00809 | Instance-aware multi-object self-supervision for monocular depth
prediction | This paper proposes a self-supervised monocular image-to-depth prediction framework that is trained with an end-to-end photometric loss that handles not only 6-DOF camera motion but also 6-DOF moving object instances. Self-supervision is performed by warping the images across a video sequence using depth and scene motion including object instances. One novelty of the proposed method is the use of the multi-head attention of the transformer network that matches moving objects across time and models their interaction and dynamics. This enables accurate and robust pose estimation for each object instance. Most image-to-depth prediction frameworks make the assumption of rigid scenes, which largely degrades their performance with respect to dynamic objects. Only a few SOTA papers have accounted for dynamic objects. The proposed method is shown to outperform these methods on standard benchmarks and the impact of the dynamic motion on these benchmarks is exposed. Furthermore, the proposed image-to-depth prediction framework is also shown to be competitive with SOTA video-to-depth prediction frameworks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 283,126 |
1810.06749 | Optimally rotated coordinate systems for adaptive least-squares
regression on sparse grids | For low-dimensional data sets with a large number of data points, standard kernel methods are usually no longer feasible for regression. Besides simple linear models or involved heuristic deep learning models, grid-based discretizations of larger (kernel) model classes lead to algorithms which naturally scale linearly in the number of data points. For moderate-dimensional or high-dimensional regression tasks, these grid-based discretizations suffer from the curse of dimensionality. Here, sparse grid methods have proven to circumvent this problem to a large extent. In this context, space- and dimension-adaptive sparse grids, which can detect and exploit a given low effective dimensionality of nominally high-dimensional data, are particularly successful. They nevertheless rely on an axis-aligned structure of the solution and exhibit issues for data with predominantly skewed and rotated coordinates. In this paper we propose a preprocessing approach for these adaptive sparse grid algorithms that determines an optimized, problem-dependent coordinate system and, thus, reduces the effective dimensionality of a given data set in the ANOVA sense. We provide numerical examples on synthetic data as well as real-world data to show how an adaptive sparse grid least-squares algorithm benefits from our preprocessing method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 110,494 |
2108.12516 | Few-Shot Table-to-Text Generation with Prototype Memory | Neural table-to-text generation models have achieved remarkable progress on an array of tasks. However, due to the data-hungry nature of neural models, their performance strongly relies on large-scale training examples, limiting their applicability in real-world applications. To address this, we propose a new framework, Prototype-to-Generate (P2G), for table-to-text generation under the few-shot scenario. The proposed framework utilizes the retrieved prototypes, which are jointly selected by an IR system and a novel prototype selector, to help the model bridge the structural gap between tables and texts. Experimental results on three benchmark datasets with three state-of-the-art models demonstrate that the proposed framework significantly improves the model performance across various evaluation metrics. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 252,515 |
1703.06554 | Object category understanding via eye fixations on freehand sketches | The study of eye gaze fixations on photographic images is an active research area. In contrast, the image subcategory of freehand sketches has not received as much attention for such studies. In this paper, we analyze the results of a free-viewing gaze fixation study conducted on 3904 freehand sketches distributed across 160 object categories. Our analysis shows that fixation sequences exhibit marked consistency within a sketch, across sketches of a category and even across suitably grouped sets of categories. This multi-level consistency is remarkable given the variability in depiction and extreme image content sparsity that characterizes hand-drawn object sketches. In our paper, we show that the multi-level consistency in the fixation data can be exploited to (a) predict a test sketch's category given only its fixation sequence and (b) build a computational model which predicts part-labels underlying fixations on objects. We hope that our findings motivate the community to deem sketch-like representations worthy of gaze-based studies vis-a-vis photographic images. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 70,248 |
1209.6342 | Sparse Ising Models with Covariates | There has been a lot of work fitting Ising models to multivariate binary data in order to understand the conditional dependency relationships between the variables. However, additional covariates are frequently recorded together with the binary data, and may influence the dependence relationships. Motivated by such a dataset on genomic instability collected from tumor samples of several types, we propose a sparse covariate dependent Ising model to study both the conditional dependency within the binary data and its relationship with the additional covariates. This results in subject-specific Ising models, where the subject's covariates influence the strength of association between the genes. As in all exploratory data analysis, interpretability of results is important, and we use L1 penalties to induce sparsity in the fitted graphs and in the number of selected covariates. Two algorithms to fit the model are proposed and compared on a set of simulated data, and asymptotic results are established. The results on the tumor dataset and their biological significance are discussed in detail. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 18,807 |
1605.04063 | A construction of $q$-ary linear codes with two weights | Linear codes with a few weights are very important in coding theory and have attracted a lot of attention. In this paper, we present a construction of $q$-ary linear codes from trace and norm functions over finite fields. The weight distributions of the linear codes are determined in some cases based on Gauss sums. It is interesting that our construction can produce optimal or almost optimal codes. Furthermore, we show that our codes can be used to construct secret sharing schemes with interesting access structures and strongly regular graphs with new parameters. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 55,823 |
2206.02914 | Training Subset Selection for Weak Supervision | Existing weak supervision approaches use all the data covered by weak signals to train a classifier. We show both theoretically and empirically that this is not always optimal. Intuitively, there is a tradeoff between the amount of weakly-labeled data and the precision of the weak labels. We explore this tradeoff by combining pretrained data representations with the cut statistic (Muhlenbach et al., 2004) to select (hopefully) high-quality subsets of the weakly-labeled training data. Subset selection applies to any label model and classifier and is very simple to plug in to existing weak supervision pipelines, requiring just a few lines of code. We show our subset selection method improves the performance of weak supervision for a wide range of label models, classifiers, and datasets. Using less weakly-labeled data improves the accuracy of weak supervision pipelines by up to 19% (absolute) on benchmark tasks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 301,075 |
2109.12040 | From images in the wild to video-informed image classification | Image classifiers work effectively when applied to structured images, yet they often fail when applied to images with very high visual complexity. This paper describes experiments applying state-of-the-art object classifiers to a unique set of images in the wild with high visual complexity collected on the island of Bali. The text describes differences between actual images in the wild and images from ImageNet, and then discusses a novel approach combining informational cues particular to video with an ensemble of imperfect classifiers in order to improve classification results on video-sourced images of plants in the wild. | false | false | false | false | false | false | true | false | false | false | false | true | false | true | false | false | false | false | 257,137 |
2410.19635 | Frozen-DETR: Enhancing DETR with Image Understanding from Frozen
Foundation Models | Recent vision foundation models can extract universal representations and show impressive abilities in various tasks. However, their application to object detection is largely overlooked, especially without fine-tuning them. In this work, we show that frozen foundation models can be a versatile feature enhancer, even though they are not pre-trained for object detection. Specifically, we explore directly transferring the high-level image understanding of foundation models to detectors in the following two ways. First, the class token in foundation models provides an in-depth understanding of the complex scene, which facilitates decoding object queries in the detector's decoder by providing a compact context. Additionally, the patch tokens in foundation models can enrich the features in the detector's encoder by providing semantic details. Utilizing frozen foundation models as plug-and-play modules rather than the commonly used backbone can significantly enhance the detector's performance while preventing the problems caused by the architecture discrepancy between the detector's backbone and the foundation model. With such a novel paradigm, we boost the SOTA query-based detector DINO from 49.0% AP to 51.9% AP (+2.9% AP) and further to 53.8% AP (+4.8% AP) by integrating one or two foundation models respectively, on the COCO validation set after training for 12 epochs with R50 as the detector's backbone. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 502,394 |
1704.00077 | Geodesic Distance Histogram Feature for Video Segmentation | This paper proposes a geodesic-distance-based feature that encodes global information for improved video segmentation algorithms. The feature is a joint histogram of intensity and geodesic distances, where the geodesic distances are computed as the shortest paths between superpixels via their boundaries. We also incorporate adaptive voting weights and spatial pyramid configurations to include spatial information into the geodesic histogram feature and show that this further improves results. The feature is generic and can be used as part of various algorithms. In experiments, we test the geodesic histogram feature by incorporating it into two existing video segmentation frameworks. This leads to significantly better performance in 3D video segmentation benchmarks on two datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 71,017 |
2009.14786 | Measuring Systematic Generalization in Neural Proof Generation with
Transformers | We are interested in understanding how well Transformer language models (TLMs) can perform reasoning tasks when trained on knowledge encoded in the form of natural language. We investigate their systematic generalization abilities on a logical reasoning task in natural language, which involves reasoning over relationships between entities grounded in first-order logical proofs. Specifically, we perform soft theorem-proving by leveraging TLMs to generate natural language proofs. We test the generated proofs for logical consistency, along with the accuracy of the final inference. We observe length-generalization issues when evaluated on longer-than-trained sequences. However, we observe TLMs improve their generalization performance after being exposed to longer, exhaustive proofs. In addition, we discover that TLMs are able to generalize better using backward-chaining proofs compared to their forward-chaining counterparts, while they find it easier to generate forward chaining proofs. We observe that models that are not trained to generate proofs are better at generalizing to problems based on longer proofs. This suggests that Transformers have efficient internal reasoning strategies that are harder to interpret. These results highlight the systematic generalization behavior of TLMs in the context of logical reasoning, and we believe this work motivates deeper inspection of their underlying reasoning strategies. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 198,139 |
2006.06320 | Hypernetwork-Based Augmentation | Data augmentation is an effective technique to improve the generalization of deep neural networks. Recently, AutoAugment proposed a well-designed search space and a search algorithm that automatically finds augmentation policies in a data-driven manner. However, AutoAugment is computationally intensive. In this paper, we propose an efficient gradient-based search algorithm, called Hypernetwork-Based Augmentation (HBA), which simultaneously learns model parameters and augmentation hyperparameters in a single training run. Our HBA uses a hypernetwork to approximate a population-based training algorithm, which enables us to tune augmentation hyperparameters by gradient descent. Besides, we introduce a weight sharing strategy that simplifies our hypernetwork architecture and speeds up our search algorithm. We conduct experiments on CIFAR-10, CIFAR-100, SVHN, and ImageNet. Our results show that HBA is competitive with the state-of-the-art methods in terms of both search speed and accuracy. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 181,383 |
2109.02625 | ERA: Entity Relationship Aware Video Summarization with Wasserstein GAN | Video summarization aims to simplify large-scale video browsing by generating concise, short summaries that differ from but well represent the original video. Due to the scarcity of video annotations, recent progress in video summarization concentrates on unsupervised methods, among which GAN-based methods are the most prevalent. This type of method includes a summarizer and a discriminator. The summarized video from the summarizer is accepted as the final output only if the video reconstructed from this summary cannot be distinguished from the original one by the discriminator. The primary problems of these GAN-based methods are twofold. First, the summarized video in this way is a subset of the original video with low redundancy that contains high-priority events/entities. This summarization criterion is not sufficient. Second, the training of the GAN framework is not stable. This paper proposes a novel Entity relationship Aware video summarization method (ERA) to address the above problems. To be more specific, we introduce an Adversarial Spatio Temporal network to construct the relationships among entities, which we think should also be given high priority in the summarization. The GAN training problem is solved by introducing the Wasserstein GAN and two newly proposed video patch/score sum losses. In addition, the score sum loss can also relieve the model's sensitivity to varying video lengths, which is an inherent problem for most current video analysis tasks. Our method substantially lifts the performance on the target benchmark datasets and exceeds the current leaderboard Rank 1 state-of-the-art CSNet (2.1% F1 score increase on TVSum and 3.1% F1 score increase on SumMe). We hope our straightforward yet effective approach will shed some light on future research in unsupervised video summarization.
| false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 253,807 |
2302.00518 | Probabilistic Search and Track with Multiple Mobile Agents | In this paper we are interested in the task of searching and tracking multiple moving targets in a bounded surveillance area with a group of autonomous mobile agents. More specifically, we assume that targets can appear and disappear at random times inside the surveillance region and their positions are random and unknown. The agents have a limited sensing range, and due to sensor imperfections they receive noisy measurements from the targets. In this work we utilize the theory of random finite sets (RFS) to capture the uncertainty in the time-varying number of targets and their states and we propose a decision and control framework, in which the mode of operation (i.e. search or track) as well as the mobility control action for each agent, at each time instance, are determined so that the collective goal of searching and tracking is achieved. Extensive simulation results demonstrate the effectiveness and performance of the proposed solution. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 343,243 |
2005.10379 | Hierarchical Isometry Properties of Hierarchical Measurements | We introduce a new class of measurement operators, coined hierarchical measurement operators, and prove results guaranteeing the efficient, stable and robust recovery of hierarchically structured signals from such measurements. We derive bounds on their hierarchical restricted isometry properties based on the restricted isometry constants of their constituent matrices, generalizing and extending prior work on Kronecker-product measurements. As an exemplary application, we apply the theory to two communication scenarios. The fast and scalable HiHTP algorithm is shown to be suitable for solving these types of problems and its performance is evaluated numerically in terms of sparse signal recovery and block detection capability. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 178,157 |
2309.08375 | Boosting Fair Classifier Generalization through Adaptive Priority
Reweighing | With the increasing penetration of machine learning applications in critical decision-making areas, calls for algorithmic fairness are more prominent. Although there have been various modalities to improve algorithmic fairness through learning with fairness constraints, their performance does not generalize well in the test set. A performance-promising fair algorithm with better generalizability is needed. This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability. Most previous reweighing methods propose to assign a unified weight for each (sub)group. Rather, our method granularly models the distance from the sample predictions to the decision boundary. Our adaptive reweighing method prioritizes samples closer to the decision boundary and assigns a higher weight to improve the generalizability of fair classifiers. Extensive experiments are performed to validate the generalizability of our adaptive priority reweighing method for accuracy and fairness measures (i.e., equal opportunity, equalized odds, and demographic parity) in tabular benchmarks. We also highlight the performance of our method in improving the fairness of language and vision models. The code is available at https://github.com/che2198/APW. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 392,144 |
2104.05544 | Investigating Methods to Improve Language Model Integration for
Attention-based Encoder-Decoder ASR Models | Attention-based encoder-decoder (AED) models learn an implicit internal language model (ILM) from the training transcriptions. Integration with an external LM trained on much more unpaired text usually leads to better performance. A Bayesian interpretation, as in the hybrid autoregressive transducer (HAT), suggests dividing by the prior of the discriminative acoustic model, which corresponds to this implicit LM, similarly to the hybrid hidden Markov model approach. The implicit LM cannot be calculated efficiently in general, and it is still unclear which methods estimate it best. In this work, we compare different approaches from the literature and propose several novel methods to estimate the ILM directly from the AED model. Our proposed methods outperform all previous approaches. We also investigate other methods to suppress the ILM, mainly by decreasing the capacity of the AED model, limiting the label context, and by training the AED model together with a pre-existing LM. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 229,767 |
1404.1518 | Nearly Optimal Minimax Tree Search? | Knuth and Moore presented a theoretical lower bound on the number of leaves that any fixed-depth minimax tree-search algorithm traversing a uniform tree must explore, the so-called minimal tree. Since real-life minimax trees are not uniform, the exact size of this tree is not known for most applications. Further, most games have transpositions, implying that there exists a minimal graph which is smaller than the minimal tree. For three games (chess, Othello and checkers) we compute the size of the minimal tree and the minimal graph. Empirical evidence shows that in all three games, enhanced Alpha-Beta search is capable of building a tree that is close in size to that of the minimal graph. Hence, it appears game-playing programs build nearly optimal search trees. However, the conventional definition of the minimal graph is wrong. There are ways in which the size of the minimal graph can be reduced: by maximizing the number of transpositions in the search, and generating cutoffs using branches that lead to smaller search trees. The conventional definition of the minimal graph is just a left-most approximation. Calculating the size of the real minimal graph is too computationally intensive. However, upper bound approximations show it to be significantly smaller than the left-most minimal graph. Hence, it appears that game-playing programs are not searching as efficiently as is widely believed. Understanding the left-most and real minimal search graphs leads to some new ideas for enhancing Alpha-Beta search. One of them, enhanced transposition cutoffs, is shown to significantly reduce search tree size. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 32,125 |
1907.11484 | Multi-level Domain Adaptive learning for Cross-Domain Detection | In recent years, object detection has shown impressive results using supervised deep learning, but it remains challenging in a cross-domain environment. The variations of illumination, style, scale, and appearance in different domains can seriously affect the performance of detection models. Previous works use adversarial training to align global features across the domain shift and to achieve image information transfer. However, such methods do not effectively match the distribution of local features, resulting in limited improvement in cross-domain object detection. To solve this problem, we propose a multi-level domain adaptive model to simultaneously align the distributions of local-level features and global-level features. We evaluate our method with multiple experiments, including adverse weather adaptation, synthetic data adaptation, and cross camera adaptation. In most object categories, the proposed method achieves superior performance against state-of-the-art techniques, which demonstrates the effectiveness and robustness of our method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 139,861 |
1510.06482 | Triangular Alignment (TAME): A Tensor-based Approach for Higher-order
Network Alignment | Network alignment has extensive applications in comparative interactomics. Traditional approaches aim to simultaneously maximize the number of conserved edges and the underlying similarity of aligned entities. We propose a novel formulation of the network alignment problem that extends topological similarity to higher-order structures and provides a new objective function that maximizes the number of aligned substructures. This objective function corresponds to an integer programming problem, which is NP-hard. Consequently, we identify a closely related surrogate function whose maximization results in a tensor eigenvector problem. Based on this formulation, we present an algorithm called Triangular AlignMEnt (TAME), which attempts to maximize the number of aligned triangles across networks. Using a case study on the NAPAbench dataset, we show that triangular alignment is capable of producing mappings with high node correctness. We further evaluate our method by aligning the yeast and human interactomes. Our results indicate that TAME outperforms state-of-the-art alignment methods in terms of conserved triangles. In addition, we show that the number of conserved triangles is more significantly correlated with node correctness and co-expression of edges than the number of conserved edges. Our formulation and the resulting algorithms can be easily extended to arbitrary motifs. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 48,113 |
2501.16373 | Unveiling Discrete Clues: Superior Healthcare Predictions for Rare
Diseases | Accurate healthcare prediction is essential for improving patient outcomes. Existing work primarily leverages advanced frameworks like attention or graph networks to capture the intricate collaborative (CO) signals in electronic health records. However, prediction for rare diseases remains challenging due to limited co-occurrence and inadequately tailored approaches. To address this issue, this paper proposes UDC, a novel method that unveils discrete clues to bridge consistent textual knowledge and CO signals within a unified semantic space, thereby enriching the representation semantics of rare diseases. Specifically, we focus on addressing two key sub-problems: (1) acquiring distinguishable discrete encodings for precise disease representation and (2) achieving semantic alignment between textual knowledge and the CO signals at the code level. For the first sub-problem, we refine the standard vector quantized process to include condition awareness. Additionally, we develop an advanced contrastive approach in the decoding stage, leveraging synthetic and mixed-domain targets as hard negatives to enrich the perceptibility of the reconstructed representation for downstream tasks. For the second sub-problem, we introduce a novel codebook update strategy using co-teacher distillation. This approach facilitates bidirectional supervision between textual knowledge and CO signals, thereby aligning semantically equivalent information in a shared discrete latent space. Extensive experiments on three datasets demonstrate our superiority. | false | true | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 527,938 |
2105.05980 | DONet: Dual-Octave Network for Fast MR Image Reconstruction | Magnetic resonance (MR) image acquisition is an inherently prolonged process, whose acceleration has long been the subject of research. This is commonly achieved by obtaining multiple undersampled images, simultaneously, through parallel imaging. In this paper, we propose the Dual-Octave Network (DONet), which is capable of learning multi-scale spatial-frequency features from both the real and imaginary components of MR data, for fast parallel MR image reconstruction. More specifically, our DONet consists of a series of Dual-Octave convolutions (Dual-OctConv), which are connected in a dense manner for better reuse of features. In each Dual-OctConv, the input feature maps and convolutional kernels are first split into two components (i.e., real and imaginary), and then divided into four groups according to their spatial frequencies. Then, our Dual-OctConv conducts intra-group information updating and inter-group information exchange to aggregate the contextual information across different groups. Our framework provides three appealing benefits: (i) It encourages information interaction and fusion between the real and imaginary components at various spatial frequencies to achieve richer representational capacity. (ii) The dense connections between the real and imaginary groups in each Dual-OctConv make the propagation of features more efficient by feature reuse. (iii) DONet enlarges the receptive field by learning multiple spatial-frequency features of both the real and imaginary components. Extensive experiments on two popular datasets (i.e., clinical knee and fastMRI), under different undersampling patterns and acceleration factors, demonstrate the superiority of our model in accelerated parallel MR image reconstruction. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 234,975 |
2305.18445 | Intelligent gradient amplification for deep neural networks | Deep learning models offer superior performance compared to other machine learning techniques for a variety of tasks and domains, but pose their own challenges. In particular, deep learning models require larger training times as the depth of a model increases, and suffer from vanishing gradients. Several solutions address these problems independently, but there have been minimal efforts to identify an integrated solution that improves the performance of a model by addressing vanishing gradients, as well as accelerating the training process to achieve higher performance at larger learning rates. In this work, we intelligently determine which layers of a deep learning model to apply gradient amplification to, using a formulated approach that analyzes gradient fluctuations of layers during training. Detailed experiments are performed for simpler and deeper neural networks using two different intelligent measures and two different thresholds that determine the amplification layers, and a training strategy where gradients are amplified only during certain epochs. Results show that our amplification offers better performance compared to the original models, and achieves an accuracy improvement of around 2.5% on the CIFAR-10 and around 4.5% on the CIFAR-100 datasets, even when the models are trained with higher learning rates. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 369,061 |
2209.12602 | Effects of language mismatch in automatic forensic voice comparison
using deep learning embeddings | In forensic voice comparison, speaker embeddings have become widely popular in the last 10 years. Most pretrained speaker embeddings are trained on English corpora because they are easily accessible. Thus, language dependency can be an important factor in automatic forensic voice comparison, especially when the target language is linguistically very different. There are numerous commercial systems available, but their models are mainly trained on a different language (mostly English) than the target language. In the case of a low-resource language, developing a corpus for forensic purposes containing enough speakers to train deep learning models is costly. This study aims to investigate whether a model pre-trained on an English corpus can be used on a target low-resource language (here, Hungarian) different from the one the model was trained on. Also, multiple samples are often not available from the offender (unknown speaker). Therefore, samples are compared pairwise, with and without speaker enrollment for suspect (known) speakers. Two corpora are applied that were developed especially for forensic purposes, and a third that is meant for traditional speaker verification. Two deep learning based speaker embedding vector extraction methods are used: the x-vector and ECAPA-TDNN. Speaker verification was evaluated in the likelihood-ratio framework. A comparison is made between the language combinations (modeling, LR calibration, evaluation). The results were evaluated by the minCllr and EER metrics. It was found that a model pre-trained on a different language, but on a corpus with a huge number of speakers, performs well on samples with language mismatch. The effects of sample duration and speaking style were also examined. It was found that the longer the duration of the sample in question, the better the performance. Also, there is no real difference if various speaking styles are applied. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 319,586
2307.03968 | Multi-Level Power Series Solution for Large Surface and Volume Electric
Field Integral Equation | In this paper, we propose a new multilevel power series solution method for solving a large surface and volume electric field integral equation based on the H-Matrix. The proposed solution method converges in a fixed number of iterations and is solved at each level of the H-Matrix computation. The solution method avoids the computation of a full matrix, as it can be solved independently at each level, starting from the leaf level. The solution at each level can be used as the final solution, thus saving the matrix computation time for the full H-Matrix. The paper shows that the leaf-level matrix computation and solution with the power series give results as accurate as the full H-Matrix iterative solver method. The method results in considerable time and memory savings compared to the H-Matrix iterative solver. Further, the proposed method retains the O(NlogN) solution complexity. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 378,226
1412.6988 | Universal test for Hippocratic randomness | Hippocratic randomness is defined in a similar way to Martin-Lof randomness, however it does not assume computability of the probability and the existence of universal test is not assured. We introduce the notion of approximation of probability and show the existence of the universal test (Levin-Schnorr theorem) for Hippocratic randomness when the logarithm of the probability is approximated within additive constant. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 38,744 |
1805.11653 | LSTMs Exploit Linguistic Attributes of Data | While recurrent neural networks have found success in a variety of natural language processing applications, they are general models of sequential data. We investigate how the properties of natural language data affect an LSTM's ability to learn a nonlinguistic task: recalling elements from its input. We find that models trained on natural language data are able to recall tokens from much longer sequences than models trained on non-language sequential data. Furthermore, we show that the LSTM learns to solve the memorization task by explicitly using a subset of its neurons to count timesteps in the input. We hypothesize that the patterns and structure in natural language data enable LSTMs to learn by providing approximate ways of reducing loss, but understanding the effect of different training data on the learnability of LSTMs remains an open question. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 98,971 |
2306.16707 | DiffusionSTR: Diffusion Model for Scene Text Recognition | This paper presents Diffusion Model for Scene Text Recognition (DiffusionSTR), an end-to-end text recognition framework using diffusion models for recognizing text in the wild. While existing studies have viewed the scene text recognition task as an image-to-text transformation, we rethought it as a text-to-text one conditioned on images in a diffusion model. We show for the first time that the diffusion model can be applied to text recognition. Furthermore, experimental results on publicly available datasets show that the proposed method achieves competitive accuracy compared to state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 376,460
2008.04640 | High-concurrency Custom-build Relational Database System's design and
SQL parser design based on Turing-complete automata | A database system is an indispensable part of software projects. It plays an important role in data organization and storage, and its performance and efficiency are directly related to the performance of the software. Nowadays, we have many general relational database systems that can be used in our projects, such as SQL Server, MySQL, Oracle, etc. It is undeniable that in most cases we can easily use these database systems to complete our projects, but given their generality, general database systems often cannot deliver the best possible speed or fully adapt to our projects. In a small number of projects, we need to design a database system that fully adapts to the project and has high efficiency and concurrency. Therefore, it is very important to consider a feasible approach to designing a database system (we only consider relational database systems here). Meanwhile, for a database system, a SQL interpretation and execution module is necessary. According to the theory of formal languages and automata, this module can be implemented with automata. In our experiment, we made the following contributions: 1) We designed a small relational database, and used it to complete a highly concurrent student course selection system. 2) We designed a general automaton module, which covers the pipeline from parsing to execution. The strategy pattern and an event-driven design are used, together with some improvements to general automata; for example, a memory-like structure is added to the automata so they can better store context. All of this allows the automaton model to be used in a variety of settings, not only the parsing and execution of SQL statements. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 191,285
2309.06388 | Computational Approaches for Predicting Drug-Disease Associations: A
Comprehensive Review | In recent decades, traditional drug research and development have been facing challenges such as high cost, long timelines, and high risks. To address these issues, many computational approaches have been suggested for predicting the relationship between drugs and diseases through drug repositioning, aiming to reduce the cost, development cycle, and risks associated with developing new drugs. Researchers have explored different computational methods to predict drug-disease associations, including drug side effects-disease associations, drug-target associations, and miRNA-disease associations. In this comprehensive review, we focus on recent advances in predicting drug-disease association methods for drug repositioning. We first categorize these methods into several groups, including neural network-based algorithms, matrix-based algorithms, recommendation algorithms, link-based reasoning algorithms, and text mining and semantic reasoning. Then, we compare the prediction performance of existing drug-disease association prediction algorithms. Lastly, we delve into the present challenges and future prospects concerning drug-disease associations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 391,407
1701.07488 | Joint Power Allocation and Beamforming for Energy-Efficient Two-Way
Multi-Relay Communications | This paper considers the joint design of user power allocation and relay beamforming in relaying communications, in which multiple pairs of single-antenna users exchange information with each other via multiple-antenna relays in two time slots. All users transmit their signals to the relays in the first time slot while the relays broadcast the beamformed signals to all users in the second time slot. The aim is to maximize the system's energy efficiency (EE) subject to quality-of-service (QoS) constraints in terms of exchange throughput requirements. The QoS constraints are nonconvex with many nonlinear cross-terms, so finding a feasible point is already computationally challenging. The sum throughput appears in the numerator while the total consumption power appears in the denominator of the EE objective function. The former is a nonconcave function and the latter is a nonconvex function, making fractional programming useless for EE optimization. Nevertheless, efficient low-complexity iterations for obtaining optimized solutions are developed. The performances of the multiple-user and multiple-relay networks under various scenarios are evaluated to show the merit of the developed approach. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 67,300
2401.02991 | GLIDE-RL: Grounded Language Instruction through DEmonstration in RL | One of the final frontiers in the development of complex human - AI collaborative systems is the ability of AI agents to comprehend the natural language and perform tasks accordingly. However, training efficient Reinforcement Learning (RL) agents grounded in natural language has been a long-standing challenge due to the complexity and ambiguity of the language and sparsity of the rewards, among other factors. Several advances in reinforcement learning, curriculum learning, continual learning, and language models have independently contributed to effective training of grounded agents in various environments. Leveraging these developments, we present a novel algorithm, Grounded Language Instruction through DEmonstration in RL (GLIDE-RL) that introduces a teacher-instructor-student curriculum learning framework for training an RL agent capable of following natural language instructions that can generalize to previously unseen language instructions. In this multi-agent framework, the teacher and the student agents learn simultaneously based on the student's current skill level. We further demonstrate the necessity for training the student agent with not just one, but multiple teacher agents. Experiments on a complex sparse reward environment validate the effectiveness of our proposed approach. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 419,924
2305.00813 | Neurosymbolic AI -- Why, What, and How | Humans interact with the environment using a combination of perception - transforming sensory inputs from their environment into symbols, and cognition - mapping symbols to knowledge about the environment for supporting abstraction, reasoning by analogy, and long-term planning. Human perception-inspired machine perception, in the context of AI, refers to large-scale pattern recognition from raw data using neural networks trained using self-supervised learning objectives such as next-word prediction or object recognition. On the other hand, machine cognition encompasses more complex computations, such as using knowledge of the environment to guide reasoning, analogy, and long-term planning. Humans can also control and explain their cognitive functions. This seems to require the retention of symbolic mappings from perception outputs to knowledge about their environment. For example, humans can follow and explain the guidelines and safety constraints driving their decision-making in safety-critical applications such as healthcare, criminal justice, and autonomous driving. This article introduces the rapidly emerging paradigm of Neurosymbolic AI, which combines neural networks and knowledge-guided symbolic approaches to create more capable and flexible AI systems. These systems have immense potential to advance both algorithm-level (e.g., abstraction, analogy, reasoning) and application-level (e.g., explainable and safety-constrained decision-making) capabilities of AI systems. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 361,462
2407.18716 | ChatSchema: A pipeline of extracting structured information with Large
Multimodal Models based on schema | Objective: This study introduces ChatSchema, an effective method for extracting and structuring information from unstructured data in medical paper reports using a combination of Large Multimodal Models (LMMs) and Optical Character Recognition (OCR) based on the schema. By integrating predefined schema, we intend to enable LMMs to directly extract and standardize information according to the schema specifications, facilitating further data entry. Method: Our approach involves a two-stage process, including classification and extraction for categorizing report scenarios and structuring information. We established and annotated a dataset to verify the effectiveness of ChatSchema, and evaluated key extraction using precision, recall, F1-score, and accuracy metrics. Based on key extraction, we further assessed value extraction. We conducted ablation studies on two LMMs to illustrate the improvement of structured information extraction with different input modalities and methods. Result: We analyzed 100 medical reports from Peking University First Hospital and established a ground truth dataset with 2,945 key-value pairs. We evaluated ChatSchema using GPT-4o and Gemini 1.5 Pro and found a higher overall performance of GPT-4o. The results are as follows: For the result of key extraction, key-precision was 98.6%, key-recall was 98.5%, key-F1-score was 98.6%. For the result of value extraction based on correct key extraction, the overall accuracy was 97.2%, precision was 95.8%, recall was 95.8%, and F1-score was 95.8%. An ablation study demonstrated that ChatSchema achieved significantly higher overall accuracy and overall F1-score of key-value extraction, compared to the Baseline, with increases of 26.9% overall accuracy and 27.4% overall F1-score, respectively. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 476,492
2412.05625 | Can Large Language Models Help Developers with Robotic Finite State
Machine Modification? | Finite state machines (FSMs) are widely used to manage robot behavior logic, particularly in real-world applications that require a high degree of reliability and structure. However, traditional manual FSM design and modification processes can be time-consuming and error-prone. We propose that large language models (LLMs) can assist developers in editing FSM code for real-world robotic use cases. LLMs, with their ability to use context and process natural language, offer a solution for FSM modification with high correctness, allowing developers to update complex control logic through natural language instructions. Our approach leverages few-shot prompting and language-guided code generation to reduce the amount of time it takes to edit an FSM. To validate this approach, we evaluate it on a real-world robotics dataset, demonstrating its effectiveness in practical scenarios. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 514,906 |
2401.08047 | Incremental Extractive Opinion Summarization Using Cover Trees | Extractive opinion summarization involves automatically producing a summary of text about an entity (e.g., a product's reviews) by extracting representative sentences that capture prevalent opinions in the review set. Typically, in online marketplaces user reviews accumulate over time, and opinion summaries need to be updated periodically to provide customers with up-to-date information. In this work, we study the task of extractive opinion summarization in an incremental setting, where the underlying review set evolves over time. Many of the state-of-the-art extractive opinion summarization approaches are centrality-based, such as CentroidRank (Radev et al., 2004; Chowdhury et al., 2022). CentroidRank performs extractive summarization by selecting a subset of review sentences closest to the centroid in the representation space as the summary. However, these methods are not capable of operating efficiently in an incremental setting, where reviews arrive one at a time. In this paper, we present an efficient algorithm for accurately computing the CentroidRank summaries in an incremental setting. Our approach, CoverSumm, relies on indexing review representations in a cover tree and maintaining a reservoir of candidate summary review sentences. CoverSumm's efficacy is supported by a theoretical and empirical analysis of running time. Empirically, on a diverse collection of data (both real and synthetically created to illustrate scaling considerations), we demonstrate that CoverSumm is up to 36x faster than baseline methods, and capable of adapting to nuanced changes in data distribution. We also conduct human evaluations of the generated summaries and find that CoverSumm is capable of producing informative summaries consistent with the underlying review set. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 421,743
2201.05752 | Moses: Efficient Exploitation of Cross-device Transferable Features for
Tensor Program Optimization | Achieving efficient execution of machine learning models has attracted significant attention recently. To generate tensor programs efficiently, a key component of DNN compilers is the cost model that can predict the performance of each configuration on specific devices. However, due to the rapid emergence of hardware platforms, it is increasingly labor-intensive to train domain-specific predictors for every new platform. Besides, current design of cost models cannot provide transferable features between different hardware accelerators efficiently and effectively. In this paper, we propose Moses, a simple and efficient design based on the lottery ticket hypothesis, which fully takes advantage of the features transferable to the target device via domain adaptation. Compared with state-of-the-art approaches, Moses achieves up to 1.53X efficiency gain in the search stage and 1.41X inference speedup on challenging DNN benchmarks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 275,478 |
2101.02244 | User Ex Machina : Simulation as a Design Probe in Human-in-the-Loop Text
Analytics | Topic models are widely used analysis techniques for clustering documents and surfacing thematic elements of text corpora. These models remain challenging to optimize and often require a "human-in-the-loop" approach where domain experts use their knowledge to steer and adjust. However, the fragility, incompleteness, and opacity of these models mean that even minor changes could induce large and potentially undesirable changes in the resulting model. In this paper we conduct a simulation-based analysis of human-centered interactions with topic models, with the objective of measuring the sensitivity of topic models to common classes of user actions. We find that user interactions have impacts that differ in magnitude but often negatively affect the quality of the resulting model in a way that can be difficult for the user to evaluate. We suggest the incorporation of sensitivity and "multiverse" analyses into topic model interfaces to surface and overcome these deficiencies. | true | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 214,560
2107.06870 | Reinforced Hybrid Genetic Algorithm for the Traveling Salesman Problem | In this paper, we propose a new method called the Reinforced Hybrid Genetic Algorithm (RHGA) for solving the famous NP-hard Traveling Salesman Problem (TSP). Specifically, we combine reinforcement learning with the well-known Edge Assembly Crossover genetic algorithm (EAX-GA) and the Lin-Kernighan-Helsgaun (LKH) local search heuristic. In the hybrid algorithm, LKH can help EAX-GA improve the population by its effective local search, and EAX-GA can help LKH escape from local optima by providing high-quality and diverse initial solutions. We stipulate that only one special individual in the EAX-GA population can be improved by LKH. Such a mechanism can prevent the population diversity, efficiency, and algorithm performance from declining due to the redundant calling of LKH upon the population. As a result, our proposed hybrid mechanism can help EAX-GA and LKH boost each other's performance without reducing the convergence rate of the population. The reinforcement learning technique based on Q-learning further promotes the hybrid genetic algorithm. Experimental results on 138 well-known and widely used TSP benchmarks with the number of cities ranging from 1,000 to 85,900 demonstrate the excellent performance of RHGA. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | 246,222
2311.14997 | Every latin hypercube of order 5 has transversals | We prove that for all $n>1$, every latin $n$-dimensional cube of order 5 has transversals. We find all 123 paratopy classes of layer-latin cubes of order 5 with no transversals. For each $n\geq 3$ and $q\geq 3$ we construct a $(2q-2)$-layer latin $n$-dimensional cuboid with no transversals. Moreover, we find all paratopy classes of nonextendible and noncompletable latin cuboids of order 5. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 410,343
2006.07695 | Learning Sparse Graphons and the Generalized Kesten-Stigum Threshold | The problem of learning graphons has attracted considerable attention across several scientific communities, with significant progress over the recent years in sparser regimes. Yet, the current techniques still require diverging degrees in order to succeed with efficient algorithms in the challenging cases where the local structure of the graph is homogeneous. This paper provides an efficient algorithm to learn graphons in the constant expected degree regime. The algorithm is shown to succeed in estimating the rank-$k$ projection of a graphon in the $L_2$ metric if the top $k$ eigenvalues of the graphon satisfy a generalized Kesten-Stigum condition. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 181,902 |
2302.13796 | Fast Trajectory End-Point Prediction with Event Cameras for Reactive
Robot Control | Prediction skills can be crucial for the success of tasks where robots have limited time to act or limited joint actuation power. In such a scenario, a vision system with a fixed, possibly too low, sampling rate could lead to the loss of informative points, slowing down prediction convergence and reducing the accuracy. In this paper, we propose to exploit the low latency, motion-driven sampling, and data compression properties of event cameras to overcome these issues. As a use-case, we use a Panda robotic arm to intercept a ball bouncing on a table. To predict the interception point, we adopt a Stateful LSTM network, a specific LSTM variant without fixed input length, which perfectly suits the event-driven paradigm and the problem at hand, where the length of the trajectory is not defined. We train the network in simulation to speed up the dataset acquisition and then fine-tune the models on real trajectories. Experimental results demonstrate how using a dense spatial sampling (i.e. event cameras) significantly increases the number of intercepted trajectories as compared to a fixed temporal sampling (i.e. frame-based cameras). | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 348,058
1803.02578 | Generating Goal-Directed Visuomotor Plans Based on Learning Using a
Predictive Coding-type Deep Visuomotor Recurrent Neural Network Model | The current paper presents how a predictive-coding-type deep recurrent neural network can generate vision-based goal-directed plans based on prior learning experience by examining experiment results using a real arm robot. The proposed deep recurrent neural network learns to predict visuo-proprioceptive sequences by extracting an adequate predictive model from various visuomotor experiences related to object-directed behaviors. The predictive model was developed in terms of a mapping from the intention state space to the expected visuo-proprioceptive sequence space through iterative learning. Our arm robot experiments, which adopted three different tasks with different levels of difficulty, showed that the error minimization principle in the predictive coding framework, applied to the inference of the optimal intention states for given goal states, can generate goal-directed plans with generalization, even for unlearned goal states. It was, however, shown that sufficient generalization requires a relatively large number of learning trajectories. The paper discusses possible countermeasures to overcome this problem. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 92,091
2409.07914 | InterACT: Inter-dependency Aware Action Chunking with Hierarchical
Attention Transformers for Bimanual Manipulation | Bimanual manipulation presents unique challenges compared to unimanual tasks due to the complexity of coordinating two robotic arms. In this paper, we introduce InterACT: Inter-dependency aware Action Chunking with Hierarchical Attention Transformers, a novel imitation learning framework designed specifically for bimanual manipulation. InterACT leverages hierarchical attention mechanisms to effectively capture inter-dependencies between dual-arm joint states and visual inputs. The framework comprises a Hierarchical Attention Encoder, which processes multi-modal inputs through segment-wise and cross-segment attention mechanisms, and a Multi-arm Decoder that generates each arm's action predictions in parallel, while sharing information between the arms through synchronization blocks by providing the other arm's intermediate output as context. Our experiments, conducted on various simulated and real-world bimanual manipulation tasks, demonstrate that InterACT outperforms existing methods. Detailed ablation studies further validate the significance of key components, including the impact of CLS tokens, cross-segment encoders, and synchronization blocks on task performance. We provide supplementary materials and videos on our project page. | false | false | false | false | true | false | false | true | false | false | false | true | false | false | false | false | false | false | 487,707 |
2212.07719 | Time-limited Balanced Truncation for Data Assimilation Problems | Balanced truncation is a well-established model order reduction method which has been applied to a variety of problems. Recently, a connection between linear Gaussian Bayesian inference problems and the system-theoretic concept of balanced truncation has been drawn. Although this connection is new, the application of balanced truncation to data assimilation is not a novel idea: it has already been used in four-dimensional variational data assimilation (4D-Var). This paper discusses the application of balanced truncation to linear Gaussian Bayesian inference, and, in particular, the 4D-Var method, thereby strengthening the link between systems theory and data assimilation further. Similarities between both types of data assimilation problems enable a generalisation of the state-of-the-art approach to the use of arbitrary prior covariances as reachability Gramians. Furthermore, we propose an enhanced approach using time-limited balanced truncation that allows to balance Bayesian inference for unstable systems and in addition improves the numerical results for short observation periods. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 336,497 |
1903.10693 | Decomposing information into copying versus transformation | In many real-world systems, information can be transmitted in two qualitatively different ways: by copying or by transformation. Copying occurs when messages are transmitted without modification, e.g., when an offspring receives an unaltered copy of a gene from its parent. Transformation occurs when messages are modified systematically during transmission, e.g., when mutational biases occur during genetic replication. Standard information-theoretic measures do not distinguish these two modes of information transfer, although they may reflect different mechanisms and have different functional consequences. Starting from a few simple axioms, we derive a decomposition of mutual information into the information transmitted by copying versus the information transmitted by transformation. We begin with a decomposition that applies when the source and destination of the channel have the same set of messages and a notion of message identity exists. We then generalize our decomposition to other kinds of channels, which can involve different source and destination sets and broader notions of similarity. In addition, we show that copy information can be interpreted as the minimal work needed by a physical copying process, which is relevant for understanding the physics of replication. We use the proposed decomposition to explore a model of amino acid substitution rates. Our results apply to any system in which the fidelity of copying, rather than simple predictability, is of critical relevance. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 125,342 |
2212.00187 | Five Properties of Specific Curiosity You Didn't Know Curious Machines
Should Have | Curiosity for machine agents has been a focus of lively research activity. The study of human and animal curiosity, particularly specific curiosity, has unearthed several properties that would offer important benefits for machine learners, but that have not yet been well-explored in machine intelligence. In this work, we conduct a comprehensive, multidisciplinary survey of the field of animal and machine curiosity. As a principal contribution of this work, we use this survey as a foundation to introduce and define what we consider to be five of the most important properties of specific curiosity: 1) directedness towards inostensible referents, 2) cessation when satisfied, 3) voluntary exposure, 4) transience, and 5) coherent long-term learning. As a second main contribution of this work, we show how these properties may be implemented together in a proof-of-concept reinforcement learning agent: we demonstrate how the properties manifest in the behaviour of this agent in a simple non-episodic grid-world environment that includes curiosity-inducing locations and induced targets of curiosity. As we would hope, our example of a computational specific curiosity agent exhibits short-term directed behaviour while updating long-term preferences to adaptively seek out curiosity-inducing situations. This work, therefore, presents a landmark synthesis and translation of specific curiosity to the domain of machine learning and reinforcement learning and provides a novel view into how specific curiosity operates and in the future might be integrated into the behaviour of goal-seeking, decision-making computational agents in complex environments. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 333,964 |
2310.08697 | The Data Lakehouse: Data Warehousing and More | Relational Database Management Systems designed for Online Analytical Processing (RDBMS-OLAP) have been foundational to democratizing data and enabling analytical use cases such as business intelligence and reporting for many years. However, RDBMS-OLAP systems present some well-known challenges. They are primarily optimized only for relational workloads, lead to proliferation of data copies which can become unmanageable, and since the data is stored in proprietary formats, it can lead to vendor lock-in, restricting access to engines, tools, and capabilities beyond what the vendor offers. As the demand for data-driven decision making surges, the need for a more robust data architecture to address these challenges becomes ever more critical. Cloud data lakes have addressed some of the shortcomings of RDBMS-OLAP systems, but they present their own set of challenges. More recently, organizations have often followed a two-tier architectural approach to take advantage of both these platforms, leveraging both cloud data lakes and RDBMS-OLAP systems. However, this approach brings additional challenges, complexities, and overhead. This paper discusses how a data lakehouse, a new architectural approach, achieves the same benefits of an RDBMS-OLAP and cloud data lake combined, while also providing additional advantages. We take today's data warehousing and break it down into implementation independent components, capabilities, and practices. We then take these aspects and show how a lakehouse architecture satisfies them. Then, we go a step further and discuss what additional capabilities and benefits a lakehouse architecture provides over an RDBMS-OLAP. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 399,489 |
1508.00784 | Are You Really Hidden? Predicting Current City from Profile and Social
Relationship | Privacy has become a major concern in Online Social Networks (OSNs) due to threats such as advertising spam, online stalking and identity theft. Although many users hide or do not fill out their private attributes in OSNs, prior studies point out that the hidden attributes may be inferred from some other public information. Thus, users' private information could still be at risk of exposure. Hitherto, little work helps users to assess the exposure probability/risk that the hidden attributes can be correctly predicted, let alone provides them with pointed countermeasures. In this article, we focus our study on the exposure risk assessment by a particular privacy-sensitive attribute - current city - in Facebook. Specifically, we first design a novel current city prediction approach that discloses users' hidden `current city' from their self-exposed information. Based on 371,913 Facebook users' data, we verify that our proposed prediction approach can predict users' current city more accurately than state-of-the-art approaches. Furthermore, we inspect the prediction results and model the current city exposure probability via some measurable characteristics of the self-exposed information. Finally, we construct an exposure estimator to assess the current city exposure risk for individual users, given their self-exposed information. Several case studies are presented to illustrate how to use our proposed estimator for privacy protection. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 45,712
1606.01077 | A Fuzzy Approach to Qualification in Design Exploration for Autonomous
Robots and Systems | Autonomous robots must operate in complex and changing environments subject to requirements on their behaviour. Verifying absolute satisfaction (true or false) of these requirements is challenging. Instead, we analyse requirements that admit flexible degrees of satisfaction. We analyse vague requirements using fuzzy logic, and probabilistic requirements using model checking. The resulting analysis method provides a partial ordering of system designs, identifying trade-offs between different requirements in terms of the degrees to which they are satisfied. A case study involving a home care robot interacting with a human is used to demonstrate the approach. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 56,747 |
2310.14194 | Distractor-aware Event-based Tracking | Event cameras, or dynamic vision sensors, have recently achieved success in tasks ranging from fundamental vision to high-level vision research. Due to their ability to asynchronously capture light intensity changes, event cameras have an inherent advantage in capturing moving objects in challenging scenarios, including objects under low light, high dynamic range, or fast motion. Thus, event cameras are a natural fit for visual object tracking. However, current event-based trackers derived from RGB trackers simply convert the input images to event frames and still follow the conventional tracking pipeline that mainly focuses on object texture for target distinction. As a result, these trackers may not be robust in challenging scenarios such as moving cameras and cluttered foregrounds. In this paper, we propose a distractor-aware event-based tracker that introduces transformer modules into a Siamese network architecture (named DANet). Specifically, our model is mainly composed of a motion-aware network and a target-aware network, which simultaneously exploit both motion cues and object contours from event data, so as to discover moving objects and identify the target object by removing dynamic distractors. Our DANet can be trained in an end-to-end manner without any post-processing and can run at over 80 FPS on a single V100. We conduct comprehensive experiments on two large event tracking datasets to validate the proposed model. We demonstrate that our tracker has superior performance against the state-of-the-art trackers in terms of both accuracy and efficiency. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 401,752
2209.04448 | Learning sparse auto-encoders for green AI image coding | Recently, convolutional auto-encoders (CAE) were introduced for image coding. They achieved performance improvements over the state-of-the-art JPEG2000 method. However, these performances were obtained using massive CAEs featuring a large number of parameters and whose training required heavy computational power. In this paper, we address the problem of lossy image compression using a CAE with a small memory footprint and low computational power usage. In order to overcome the computational cost issue, the majority of the literature uses Lagrangian proximal regularization methods, which are time consuming themselves. In this work, we propose a constrained approach and a new structured sparse learning method. We design an algorithm and test it on three constraints: the classical $\ell_1$ constraint, the $\ell_{1,\infty}$ and the new $\ell_{1,1}$ constraint. Experimental results show that the $\ell_{1,1}$ constraint provides the best structured sparsity, resulting in a high reduction of memory and computational cost, with similar rate-distortion performance as with dense networks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 316,790
1810.09304 | On the k-Boundedness for Existential Rules | The chase is a fundamental tool for existential rules. Several chase variants are known, which differ on how they handle redundancies possibly caused by the introduction of nulls. Given a chase variant, the halting problem takes as input a set of existential rules and asks if this set of rules ensures the termination of the chase for any factbase. It is well-known that this problem is undecidable for all known chase variants. The related problem of boundedness asks if a given set of existential rules is bounded, i.e., whether there is a predefined upper bound on the number of (breadth-first) steps of the chase, independently from any factbase. This problem is already undecidable in the specific case of datalog rules. However, knowing that a set of rules is bounded for some chase variant does not help much in practice if the bound is unknown. Hence, in this paper, we investigate the decidability of the k-boundedness problem, which asks whether a given set of rules is bounded by an integer k. We prove that k-boundedness is decidable for three chase variants, namely the oblivious, semi-oblivious and restricted chase. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | false | 111,030 |
1809.09802 | Robust Shape Estimation for 3D Deformable Object Manipulation | Existing shape estimation methods for deformable object manipulation suffer from the drawbacks of being off-line, model dependent, noise-sensitive or occlusion-sensitive, and thus are not appropriate for manipulation tasks requiring high precision. In this paper, we present a real-time shape estimation approach for autonomous robotic manipulation of 3D deformable objects. Our method fulfills all the requirements necessary for the high-quality deformable object manipulation in terms of being real-time, model-free and robust to noise and occlusion. These advantages are accomplished using a joint tracking and reconstruction framework, in which we track the object deformation by aligning a reference shape model with the stream input from the RGB-D camera, and simultaneously upgrade the reference shape model according to the newly captured RGB-D data. We have evaluated the quality and robustness of our real-time shape estimation pipeline on a set of deformable manipulation tasks implemented on physical robots. Videos are available at https://lifeisfantastic.github.io/DeformShapeEst/ | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 108,775 |
1309.5821 | Undefined By Data: A Survey of Big Data Definitions | The term big data has become ubiquitous. Owing to a shared origin between academia, industry and the media there is no single unified definition, and various stakeholders provide diverse and often contradictory definitions. The lack of a consistent definition introduces ambiguity and hampers discourse relating to big data. This short paper attempts to collate the various definitions which have gained some degree of traction and to furnish a clear and concise definition of an otherwise ambiguous term. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 27,197 |
2410.15521 | Lying mirror | We introduce an all-optical system, termed the "lying mirror", to hide input information by transforming it into misleading, ordinary-looking patterns that effectively camouflage the underlying image data and deceive the observers. This misleading transformation is achieved through passive light-matter interactions of the incident light with an optimized structured diffractive surface, enabling the optical concealment of any form of secret input data without any digital computing. These lying mirror designs were shown to camouflage different types of input image data, exhibiting robustness against a range of adversarial manipulations, including random image noise as well as unknown, random rotations, shifts, and scaling of the object features. The feasibility of the lying mirror concept was also validated experimentally using a structured micro-mirror array along with multi-wavelength illumination at 480, 550 and 600 nm, covering the blue, green and red image channels. This framework showcases the power of structured diffractive surfaces for visual information processing and might find various applications in defense, security and entertainment. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 500,585 |
2404.16557 | Energy-Latency Manipulation of Multi-modal Large Language Models via
Verbose Samples | Despite the exceptional performance of multi-modal large language models (MLLMs), their deployment requires substantial computational resources. Once malicious users induce high energy consumption and latency time (energy-latency cost), it will exhaust computational resources and harm availability of service. In this paper, we investigate this vulnerability for MLLMs, particularly image-based and video-based ones, and aim to induce high energy-latency cost during inference by crafting an imperceptible perturbation. We find that high energy-latency cost can be manipulated by maximizing the length of generated sequences, which motivates us to propose verbose samples, including verbose images and videos. Concretely, two modality non-specific losses are proposed, including a loss to delay end-of-sequence (EOS) token and an uncertainty loss to increase the uncertainty over each generated token. In addition, improving diversity is important to encourage longer responses by increasing the complexity, which inspires the following modality specific loss. For verbose images, a token diversity loss is proposed to promote diverse hidden states. For verbose videos, a frame feature diversity loss is proposed to increase the feature diversity among frames. To balance these losses, we propose a temporal weight adjustment algorithm. Experiments demonstrate that our verbose samples can largely extend the length of generated sequences. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 449,542 |
2005.07344 | Resisting Crowd Occlusion and Hard Negatives for Pedestrian Detection in
the Wild | Pedestrian detection has been heavily studied in the last decade due to its wide application. Despite incremental progress, crowd occlusion and hard negatives still challenge current state-of-the-art pedestrian detectors. In this paper, we offer two approaches based on the general region-based detection framework to tackle these challenges. Specifically, to address occlusion, we design a novel coulomb loss as a regulator on bounding box regression, in which proposals are attracted by their target instance and repelled by the adjacent non-target instances. For hard negatives, we propose an efficient semantic-driven strategy for selecting anchor locations, which can sample informative negative examples at the training phase for classification refinement. It is worth noting that these methods can also be applied to the general object detection domain and are trainable in an end-to-end manner. We achieve consistently high performance on the Caltech-USA and CityPersons benchmarks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 177,254
1909.03118 | Trading-Off Static and Dynamic Regret in Online Least-Squares and Beyond | Recursive least-squares algorithms often use forgetting factors as a heuristic to adapt to non-stationary data streams. The first contribution of this paper rigorously characterizes the effect of forgetting factors for a class of online Newton algorithms. For exp-concave and strongly convex objectives, the algorithms achieve the dynamic regret of $\max\{O(\log T),O(\sqrt{TV})\}$, where $V$ is a bound on the path length of the comparison sequence. In particular, we show how classic recursive least-squares with a forgetting factor achieves this dynamic regret bound. By varying $V$, we obtain a trade-off between static and dynamic regret. In order to obtain more computationally efficient algorithms, our second contribution is a novel gradient descent step size rule for strongly convex functions. Our gradient descent rule recovers the order optimal dynamic regret bounds described above. For smooth problems, we can also obtain static regret of $O(T^{1-\beta})$ and dynamic regret of $O(T^\beta V^*)$, where $\beta \in (0,1)$ and $V^*$ is the path length of the sequence of minimizers. By varying $\beta$, we obtain a trade-off between static and dynamic regret. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 144,378 |
2205.02911 | A Driver-Vehicle Model for ADS Scenario-based Testing | Scenario-based testing for automated driving systems (ADS) must be able to simulate traffic scenarios that rely on interactions with other vehicles. Although many languages for high-level scenario modelling have been proposed, they lack the features to precisely and reliably control the required micro-simulation, while also supporting behavior reuse and test reproducibility for a wide range of interactive scenarios. To fill this gap between scenario design and execution, we propose the Simulated Driver-Vehicle (SDV) model to represent and simulate vehicles as dynamic entities with their behavior being constrained by scenario design and goals set by testers. The model combines driver and vehicle as a single entity. It is based on human-like driving and the mechanical limitations of real vehicles for realistic simulation. The model leverages behavior trees to express high-level behaviors in terms of lower-level maneuvers, affording multiple driving styles and reuse. Furthermore, optimization-based maneuver planners guide the simulated vehicles towards the desired behavior. Our extensive evaluation shows the model's design effectiveness using NHTSA pre-crash scenarios, its motion realism in comparison to naturalistic urban traffic, and its scalability with traffic density. Finally, we show the applicability of our SDV model to test a real ADS and to identify crash scenarios, which are impractical to represent using predefined vehicle trajectories. The SDV model instances can be injected into existing simulation environments via co-simulation. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 295,103 |
1712.08084 | AVEID: Automatic Video System for Measuring Engagement In Dementia | Engagement in dementia is typically measured using behavior observational scales (BOS) that are tedious and involve intensive manual labor to annotate, and are therefore not easily scalable. We propose AVEID, a low cost and easy-to-use video-based engagement measurement tool to determine the engagement level of a person with dementia (PwD) during digital interaction. We show that the objective behavioral measures computed via AVEID correlate well with subjective expert impressions for the popular MPES and OME BOS, confirming its viability and effectiveness. Moreover, AVEID measures can be obtained for a variety of engagement designs, thereby facilitating large-scale studies with PwD populations. | true | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 87,132 |
2203.15629 | Stochastic Conservative Contextual Linear Bandits | Many physical systems have underlying safety considerations that require that the strategy deployed ensures the satisfaction of a set of constraints. Further, often we have only partial information on the state of the system. We study the problem of safe real-time decision making under uncertainty. In this paper, we formulate a conservative stochastic contextual bandit formulation for real-time decision making when an adversary chooses a distribution on the set of possible contexts and the learner is subject to certain safety/performance constraints. The learner observes only the context distribution and the exact context is unknown, and the goal is to develop an algorithm that selects a sequence of optimal actions to maximize the cumulative reward without violating the safety constraints at any time step. By leveraging the UCB algorithm for this setting, we propose a conservative linear UCB algorithm for stochastic bandits with context distribution. We prove an upper bound on the regret of the algorithm and show that it can be decomposed into three terms: (i) an upper bound for the regret of the standard linear UCB algorithm, (ii) a constant term (independent of time horizon) that accounts for the loss of being conservative in order to satisfy the safety constraint, and (iii) a constant term (independent of time horizon) that accounts for the loss due to the context being unknown and only its distribution being known. To validate the performance of our approach we perform extensive simulations on synthetic data and on real-world maize data collected through the Genomes to Fields (G2F) initiative. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 288,480
2407.13881 | Privacy-preserving gradient-based fair federated learning | Federated learning (FL) schemes allow multiple participants to collaboratively train neural networks without the need to directly share the underlying data. However, in early schemes, all participants eventually obtain the same model. Moreover, the aggregation is typically carried out by a third party, who obtains combined gradients or weights, which may reveal the model. These downsides underscore the demand for fair and privacy-preserving FL schemes. Here, collaborative fairness requires that individual model quality depend on the individual data contribution. Privacy is demanded with respect to any kind of data outsourced to the third party. There already exist some approaches aiming for either fair or privacy-preserving FL, and a few works even address both features. In our paper, we build upon these seminal works and present a novel, fair and privacy-preserving FL scheme. Our approach, which mainly relies on homomorphic encryption, stands out for exclusively using local gradients. This increases the usability in comparison to state-of-the-art approaches and thereby opens the door to applications in control. | false | false | false | false | false | false | true | false | false | false | true | false | true | false | false | false | false | false | 474,551
1809.03481 | Longitudinal Safety Analysis For Heterogeneous Platoon Of Automated And
Human Vehicles | With the recent advancement in environmental sensing, vehicle control and vehicle-infrastructure cooperation technologies, more and more autonomous driving companies have started to put their intelligent cars into road tests. In the near future, however, we will face heterogeneous traffic with both intelligent connected vehicles and human-driven vehicles. In this paper, we investigated the impacts of four collision avoidance algorithms under different intelligent connected vehicle market penetration rates. A customized simulation platform is built, in which a platoon can be initiated with many key parameters. At every short time interval, the vehicle dynamics are updated and fed into a kinematics model. If a collision occurs, the energy loss is calculated to represent the crash severity. Four collision avoidance algorithms are chosen and compared in terms of crash rate and severity at different market penetration rates and different locations in the platoon. The results generate interesting debates on the issues of heterogeneous platoon safety. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 107,338
2205.06941 | Blockchain Goes Green? Part II: Characterizing the Performance and Cost
of Blockchains on the Cloud and at the Edge | While state-of-the-art permissioned blockchains can achieve thousands of transactions per second on commodity hardware with x86/64 architecture, their performance when running on different architectures is not clear. The goal of this work is to characterize the performance and cost of permissioned blockchains on different hardware systems, which is important as diverse application domains are adopting them. To this end, we conduct extensive cost and performance evaluation of two permissioned blockchains, namely Hyperledger Fabric and ConsenSys Quorum, on five different types of hardware covering both x86/64 and ARM architecture, as well as both cloud and edge computing. The hardware nodes include servers with Intel Xeon CPU, servers with ARM-based Amazon Graviton CPU, and edge devices with ARM-based CPU. Our results reveal a diverse profile of the two blockchains across different settings, demonstrating the impact of hardware choices on the overall performance and cost. We find that Graviton servers outperform Xeon servers in many settings, due to their powerful CPU and high memory bandwidth. Edge devices with ARM architecture, on the other hand, exhibit low performance. When comparing the cloud with the edge, we show that the cost of the latter is much smaller in the long run if manpower cost is not considered. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 296,412
2003.04427 | Transfer Reinforcement Learning under Unobserved Contextual Information | In this paper, we study a transfer reinforcement learning problem where the state transitions and rewards are affected by the environmental context. Specifically, we consider a demonstrator agent that has access to a context-aware policy and can generate transition and reward data based on that policy. These data constitute the experience of the demonstrator. Then, the goal is to transfer this experience, excluding the underlying contextual information, to a learner agent that does not have access to the environmental context, so that they can learn a control policy using fewer samples. It is well known that, disregarding the causal effect of the contextual information, can introduce bias in the transition and reward models estimated by the learner, resulting in a learned suboptimal policy. To address this challenge, in this paper, we develop a method to obtain causal bounds on the transition and reward functions using the demonstrator's data, which we then use to obtain causal bounds on the value functions. Using these value function bounds, we propose new Q learning and UCB-Q learning algorithms that converge to the true value function without bias. We provide numerical experiments for robot motion planning problems that validate the proposed value function bounds and demonstrate that the proposed algorithms can effectively make use of the data from the demonstrator to accelerate the learning process of the learner. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 167,558 |
2306.09462 | Motion Comfort Optimization for Autonomous Vehicles: Concepts, Methods,
and Techniques | This article outlines the architecture of autonomous driving and related complementary frameworks from the perspective of human comfort. The technical elements for measuring Autonomous Vehicle (AV) user comfort and psychoanalysis are listed here. At the same time, this article introduces the technology related to the structure of automated driving and its reaction time. We also discuss the technical details related to the automated driving comfort system, the response time of the AV driver, the comfort level of the AV, motion sickness, and related optimization technologies. The function of a sensor is affected by various factors. Since the sensors of an automated vehicle mainly perceive the environment around the vehicle, including the weather, we discuss the challenges and limitations of second-hand sensors in autonomous vehicles under different weather conditions. The comfort and safety of autonomous driving are also factors that affect the development of autonomous driving technologies. This article further analyzes the impact of autonomous driving on the user's physical and psychological states and how the comfort factors of autonomous vehicles affect the automotive market. Also, part of our focus is on the benefits and shortcomings of autonomous driving. The goal is to present an exhaustive overview of the most relevant technical matters to help researchers and application developers comprehend the different comfort factors and systems of autonomous driving. Finally, we provide detailed automated driving comfort use cases to illustrate the comfort-related issues of autonomous driving. Then, we provide implications and insights for the future of autonomous driving. | false | false | false | false | true | false | true | true | false | false | false | true | false | false | false | false | false | false | 373,839
2104.03775 | Geometry-based Distance Decomposition for Monocular 3D Object Detection | Monocular 3D object detection is of great significance for autonomous driving but remains challenging. The core challenge is to predict the distance of objects in the absence of explicit depth information. Unlike regressing the distance as a single variable in most existing methods, we propose a novel geometry-based distance decomposition to recover the distance by its factors. The decomposition factors the distance of objects into the most representative and stable variables, i.e. the physical height and the projected visual height in the image plane. Moreover, the decomposition maintains the self-consistency between the two heights, leading to robust distance prediction when both predicted heights are inaccurate. The decomposition also enables us to trace the causes of the distance uncertainty for different scenarios. Such decomposition makes the distance prediction interpretable, accurate, and robust. Our method directly predicts 3D bounding boxes from RGB images with a compact architecture, making the training and inference simple and efficient. The experimental results show that our method achieves the state-of-the-art performance on the monocular 3D Object Detection and Birds Eye View tasks of the KITTI dataset, and can generalize to images with different camera intrinsics. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 229,171 |
2402.07895 | Detection of Spider Mites on Labrador Beans through Machine Learning
Approaches Using Custom Datasets | Amidst growing food production demands, early plant disease detection is essential to safeguard crops; this study proposes a visual machine learning approach for plant disease detection, harnessing RGB and NIR data collected in real-world conditions through a JAI FS-1600D-10GE camera to build an RGBN dataset. A two-stage early plant disease detection model with YOLOv8 and a sequential CNN was used to train on a dataset with partial labels, which showed a 3.6% increase in mAP compared to a single-stage end-to-end segmentation model. The sequential CNN model achieved 90.62% validation accuracy utilising RGBN data. An average of 6.25% validation accuracy increase is found using RGBN in classification compared to RGB using ResNet15 and the sequential CNN models. Further research and dataset improvements are needed to meet food production demands. | false | false | false | false | true | false | false | true | false | false | false | true | false | false | false | false | false | false | 428,886 |
2104.14029 | Reducing Risk and Uncertainty of Deep Neural Networks on Diagnosing
COVID-19 Infection | Effective and reliable screening of patients via Computer-Aided Diagnosis can play a crucial part in the battle against COVID-19. Most of the existing works focus on developing sophisticated methods yielding high detection performance, yet not addressing the issue of predictive uncertainty. In this work, we introduce uncertainty estimation to detect confusing cases for expert referral to address the unreliability of state-of-the-art (SOTA) DNNs on COVID-19 detection. To the best of our knowledge, we are the first to address this issue on the COVID-19 detection problem. Specifically, we investigate a number of SOTA uncertainty estimation methods on a publicly available COVID dataset and present our experimental findings. In collaboration with medical professionals, we further validate the results to ensure the viability of the best-performing method in clinical practice. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 232,682
2310.06873 | A review of uncertainty quantification in medical image analysis:
probabilistic and non-probabilistic methods | The comprehensive integration of machine learning healthcare models within clinical practice remains suboptimal, notwithstanding the proliferation of high-performing solutions reported in the literature. A predominant factor hindering widespread adoption pertains to an insufficiency of evidence affirming the reliability of the aforementioned models. Recently, uncertainty quantification methods have been proposed as a potential solution to quantify the reliability of machine learning models and thus increase the interpretability and acceptability of the results. In this review, we offer a comprehensive overview of prevailing methods proposed to quantify uncertainty inherent in machine learning models developed for various medical image tasks. Contrary to earlier reviews that exclusively focused on probabilistic methods, this review also explores non-probabilistic approaches, thereby furnishing a more holistic survey of research pertaining to uncertainty quantification for machine learning models. An analysis of medical image tasks is presented, together with a summary and discussion of medical applications and the corresponding uncertainty evaluation protocols, focusing on the specific challenges of uncertainty in medical image analysis. We also highlight some potential future research work at the end. Generally, this review aims to allow researchers from both clinical and technical backgrounds to gain a quick yet in-depth understanding of the research in uncertainty quantification for medical image analysis machine learning models. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 398,750
1805.12064 | Stochastic Deep Compressive Sensing for the Reconstruction of Diffusion
Tensor Cardiac MRI | Understanding the structure of the heart at the microscopic scale of cardiomyocytes and their aggregates provides new insights into the mechanisms of heart disease and enables the investigation of effective therapeutics. Diffusion Tensor Cardiac Magnetic Resonance (DT-CMR) is a unique non-invasive technique that can resolve the microscopic structure, organisation, and integrity of the myocardium without the need for exogenous contrast agents. However, this technique suffers from relatively low signal-to-noise ratio (SNR) and frequent signal loss due to respiratory and cardiac motion. Current DT-CMR techniques rely on acquiring and averaging multiple signal acquisitions to improve the SNR. Moreover, in order to mitigate the influence of respiratory movement, patients are required to perform many breath holds which results in prolonged acquisition durations (e.g., ~30 mins using the existing technology). In this study, we propose a novel cascaded Convolutional Neural Networks (CNN) based compressive sensing (CS) technique and explore its applicability to improve DT-CMR acquisitions. Our simulation based studies have achieved high reconstruction fidelity and good agreement between DT-CMR parameters obtained with the proposed reconstruction and fully sampled ground truth. When compared to other state-of-the-art methods, our proposed deep cascaded CNN method and its stochastic variation demonstrated significant improvements. To the best of our knowledge, this is the first study using deep CNN based CS for the DT-CMR reconstruction. In addition, with relatively straightforward modifications to the acquisition scheme, our method can easily be translated into a method for online, at-the-scanner reconstruction enabling the deployment of accelerated DT-CMR in various clinical applications. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 99,083 |
1709.05038 | Self-Guiding Multimodal LSTM - when we do not have a perfect training
dataset for image captioning | In this paper, a self-guiding multimodal LSTM (sg-LSTM) image captioning model is proposed to handle uncontrolled imbalanced real-world image-sentence datasets. We collect the FlickrNYC dataset from Flickr as our testbed with 306,165 images, and the original text descriptions uploaded by the users are utilized as the ground truth for training. Descriptions in the FlickrNYC dataset vary dramatically ranging from short term-descriptions to long paragraph-descriptions and can describe any visual aspects, or even refer to objects that are not depicted. To deal with the imbalanced and noisy situation and to fully explore the dataset itself, we propose a novel guiding textual feature extracted utilizing a multimodal LSTM (m-LSTM) model. Training of m-LSTM is based on the portion of data in which the image content and the corresponding descriptions are strongly bonded. Afterwards, during the training of sg-LSTM on the remaining training data, this guiding information serves as additional input to the network along with the image representations and the ground-truth descriptions. By integrating these input components into a multimodal block, we aim to form a training scheme with the textual information tightly coupled with the image content. The experimental results demonstrate that the proposed sg-LSTM model outperforms the traditional state-of-the-art multimodal RNN captioning framework in successfully describing the key components of the input images. | false | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | 80,771
1806.11532 | TextWorld: A Learning Environment for Text-based Games | We introduce TextWorld, a sandbox learning environment for the training and evaluation of RL agents on text-based games. TextWorld is a Python library that handles interactive play-through of text games, as well as backend functions like state tracking and reward assignment. It comes with a curated list of games whose features and challenges we have analyzed. More significantly, it enables users to handcraft or automatically generate new games. Its generative mechanisms give precise control over the difficulty, scope, and language of constructed games, and can be used to relax challenges inherent to commercial text games like partial observability and sparse rewards. By generating sets of varied but similar games, TextWorld can also be used to study generalization and transfer learning. We cast text-based games in the Reinforcement Learning formalism, use our framework to develop a set of benchmark games, and evaluate several baseline agents on this set and the curated list. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 101,735 |
2501.04477 | Rethinking High-speed Image Reconstruction Framework with Spike Camera | Spike cameras, as innovative neuromorphic devices, generate continuous spike streams to capture high-speed scenes with lower bandwidth and higher dynamic range than traditional RGB cameras. However, reconstructing high-quality images from the spike input under low-light conditions remains challenging. Conventional learning-based methods often rely on the synthetic dataset as the supervision for training. Still, these approaches falter when dealing with noisy spikes fired under the low-light environment, leading to further performance degradation in the real-world dataset. This phenomenon is primarily due to inadequate noise modelling and the domain gap between synthetic and real datasets, resulting in recovered images with unclear textures, excessive noise, and diminished brightness. To address these challenges, we introduce a novel spike-to-image reconstruction framework SpikeCLIP that goes beyond traditional training paradigms. Leveraging the CLIP model's powerful capability to align text and images, we incorporate the textual description of the captured scene and unpaired high-quality datasets as the supervision. Our experiments on real-world low-light datasets U-CALTECH and U-CIFAR demonstrate that SpikeCLIP significantly enhances texture details and the luminance balance of recovered images. Furthermore, the reconstructed images are well-aligned with the broader visual features needed for downstream tasks, ensuring more robust and versatile performance in challenging environments. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 523,238 |
1411.3334 | Sparse Quantum Codes from Quantum Circuits | We describe a general method for turning quantum circuits into sparse quantum subsystem codes. The idea is to turn each circuit element into a set of low-weight gauge generators that enforce the input-output relations of that circuit element. Using this prescription, we can map an arbitrary stabilizer code into a new subsystem code with the same distance and number of encoded qubits but where all the generators have constant weight, at the cost of adding some ancilla qubits. With an additional overhead of ancilla qubits, the new code can also be made spatially local. Applying our construction to certain concatenated stabilizer codes yields families of subsystem codes with constant-weight generators and with minimum distance $d = n^{1-\epsilon}$, where $\epsilon = O(1/\sqrt{\log n})$. For spatially local codes in $D$ dimensions we nearly saturate a bound due to Bravyi and Terhal and achieve $d = n^{1-\epsilon-1/D}$. Previously the best code distance achievable with constant-weight generators in any dimension, due to Freedman, Meyer and Luo, was $O(\sqrt{n\log n})$ for a stabilizer code. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 37,493 |
1407.8147 | Stochastic Coordinate Coding and Its Application for Drosophila Gene
Expression Pattern Annotation | \textit{Drosophila melanogaster} has been established as a model organism for investigating the fundamental principles of developmental gene interactions. The gene expression patterns of \textit{Drosophila melanogaster} can be documented as digital images, which are annotated with anatomical ontology terms to facilitate pattern discovery and comparison. The automated annotation of gene expression pattern images has received increasing attention due to the recent expansion of the image database. The effectiveness of gene expression pattern annotation relies on the quality of feature representation. Previous studies have demonstrated that sparse coding is effective for extracting features from gene expression images. However, solving sparse coding remains a computationally challenging problem, especially when dealing with large-scale data sets and learning large size dictionaries. In this paper, we propose a novel algorithm to solve the sparse coding problem, called Stochastic Coordinate Coding (SCC). The proposed algorithm alternatively updates the sparse codes via just a few steps of coordinate descent and updates the dictionary via second order stochastic gradient descent. The computational cost is further reduced by focusing on the non-zero components of the sparse codes and the corresponding columns of the dictionary only in the updating procedure. Thus, the proposed algorithm significantly improves the efficiency and the scalability, making sparse coding applicable for large-scale data sets and large dictionary sizes. Our experiments on Drosophila gene expression data sets demonstrate the efficiency and the effectiveness of the proposed algorithm. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 35,010 |
2110.11525 | Digital and Physical-World Attacks on Remote Pulse Detection | Remote photoplethysmography (rPPG) is a technique for estimating blood volume changes from reflected light without the need for a contact sensor. We present the first examples of presentation attacks in the digital and physical domains on rPPG from face video. Digital attacks are easily performed by adding imperceptible periodic noise to the input videos. Physical attacks are performed with illumination from visible spectrum LEDs placed in close proximity to the face, while still being difficult to perceive with the human eye. We also show that our attacks extend beyond medical applications, since the method can effectively generate a strong periodic pulse on 3D-printed face masks, which presents difficulties for pulse-based face presentation attack detection (PAD). The paper concludes with ideas for using this work to improve robustness of rPPG methods and pulse-based face PAD. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 262,512 |
1902.03326 | Architecture Compression | In this paper we propose a novel approach to model compression termed Architecture Compression. Instead of operating on the weight or filter space of the network like classical model compression methods, our approach operates on the architecture space. A 1-D CNN encoder-decoder is trained to learn a mapping from discrete architecture space to a continuous embedding and back. Additionally, this embedding is jointly trained to regress accuracy and parameter count in order to incorporate information about the architecture's effectiveness on the dataset. During the compression phase, we first encode the network and then perform gradient descent in continuous space to optimize a compression objective function that maximizes accuracy and minimizes parameter count. The final continuous feature is then mapped to a discrete architecture using the decoder. We demonstrate the merits of this approach on visual recognition tasks such as CIFAR-10, CIFAR-100, Fashion-MNIST and SVHN and achieve a greater than 20x compression on CIFAR-10. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | true | false | false | 121,070 |
2012.02300 | Fully Convolutional Network Bootstrapped by Word Encoding and Embedding
for Activity Recognition in Smart Homes | Activity recognition in smart homes is essential when we wish to propose automatic services for the inhabitants. However, it poses challenges in terms of variability of the environment, sensorimotor system, but also user habits. Therefore, end-to-end systems fail at automatically extracting key features, without extensive pre-processing. We propose to tackle feature extraction for activity recognition in smart homes by merging methods from the Natural Language Processing (NLP) and the Time Series Classification (TSC) domains. We evaluate the performance of our method on two datasets issued from the Center for Advanced Studies in Adaptive Systems (CASAS). Moreover, we analyze the contributions of the use of NLP encoding Bag-Of-Word with Embedding as well as the ability of the FCN algorithm to automatically extract features and classify. The method we propose shows good performance in offline activity classification. Our analysis also shows that FCN is a suitable algorithm for smart home activity recognition and highlights the advantages of automatic feature extraction. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 209,720
2112.07909 | Homography Decomposition Networks for Planar Object Tracking | Planar object tracking plays an important role in AI applications, such as robotics, visual servoing, and visual SLAM. Although the previous planar trackers work well in most scenarios, it is still a challenging task due to the rapid motion and large transformation between two consecutive frames. The essential reason behind this problem is that the condition number of such a non-linear system changes unstably when the searching range of the homography parameter space becomes larger. To this end, we propose a novel Homography Decomposition Networks (HDN) approach that drastically reduces and stabilizes the condition number by decomposing the homography transformation into two groups. Specifically, a similarity transformation estimator is designed to predict the first group robustly by a deep convolution equivariant network. By taking advantage of the scale and rotation estimation with high confidence, a residual transformation is estimated by a simple regression model. Furthermore, the proposed end-to-end network is trained in a semi-supervised fashion. Extensive experiments show that our proposed approach outperforms the state-of-the-art planar tracking methods by a large margin on the challenging POT, UCSB and POIC datasets. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 271,635
2311.13199 | Two-stage Synthetic Supervising and Multi-view Consistency
Self-supervising based Animal 3D Reconstruction by Single Image | While Pixel-aligned Implicit Function (PIFu) effectively captures subtle variations in body shape within a low-dimensional space through extensive training with human 3D scans, its application to live animals presents formidable challenges due to the difficulty of obtaining animal cooperation for 3D scanning. To address this challenge, we propose the combination of two-stage supervised and self-supervised training. In the first stage, we leverage synthetic animal models for supervised learning. This allows the model to learn from a diverse set of virtual animal instances. In the second stage, we use 2D multi-view consistency as a self-supervised training method. This further enhances the model's ability to reconstruct accurate and realistic 3D shape and texture from largely available single-view images of real animals. The results of our study demonstrate that our approach outperforms state-of-the-art methods in both quantitative and qualitative aspects of bird 3D digitization. The source code is available at https://github.com/kuangzijian/drifu-for-animals. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 409,664
1106.5294 | Set systems: order types, continuous nondeterministic deformations, and
quasi-orders | By reformulating a learning process of a set system L as a game between Teacher and Learner, we define the order type of L to be the order type of the game tree, if the tree is well-founded. The features of the order type of L (dim L in symbol) are (1) We can represent any well-quasi-order (wqo for short) by the set system L of the upper-closed sets of the wqo such that the maximal order type of the wqo is equal to dim L. (2) dim L is an upper bound of the mind-change complexity of L. dim L is defined iff L has a finite elasticity (fe for short), where, according to computational learning theory, if an indexed family of recursive languages has fe then it is learnable by an algorithm from positive data. Regarding set systems as subspaces of Cantor spaces, we prove that fe of set systems is preserved by any continuous function which is monotone with respect to the set-inclusion. By it, we prove that finite elasticity is preserved by various (nondeterministic) language operators (Kleene-closure, shuffle-closure, union, product, intersection,. . ..) The monotone continuous functions represent nondeterministic computations. If a monotone continuous function has a computation tree with each node followed by at most n immediate successors and the order type of a set system L is {\alpha}, then the direct image of L is a set system of order type at most n-adic diagonal Ramsey number of {\alpha}. Furthermore, we provide an order-type-preserving contravariant embedding from the category of quasi-orders and finitely branching simulations between them, into the complete category of subspaces of Cantor spaces and monotone continuous functions having Girard's linearity between them. Keyword: finite elasticity, shuffle-closure | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 11,017 |
1611.07329 | Autonomous Landing of a Multirotor Micro Air Vehicle on a High Velocity
Ground Vehicle | While autonomous multirotor micro aerial vehicles (MAVs) are uniquely well suited for certain types of missions benefiting from stationary flight capabilities, their more widespread usage still faces many hurdles, due in particular to their limited range and the difficulty of fully automating their deployment and retrieval. In this paper we address these issues by solving the problem of the automated landing of a quadcopter on a ground vehicle moving at relatively high speed. We present our system architecture, including the structure of our Kalman filter for the estimation of the relative position and velocity between the quadcopter and the landing pad, as well as our controller design for the full rendezvous and landing maneuvers. The system is experimentally validated by successfully landing in multiple trials a commercial quadcopter on the roof of a car moving at speeds of up to 50 km/h. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 64,340 |
2407.19628 | Text2LiDAR: Text-guided LiDAR Point Cloud Generation via Equirectangular
Transformer | The complex traffic environment and various weather conditions make the collection of LiDAR data expensive and challenging. Achieving high-quality and controllable LiDAR data generation is urgently needed, controlling with text is a common practice, but there is little research in this field. To this end, we propose Text2LiDAR, the first efficient, diverse, and text-controllable LiDAR data generation model. Specifically, we design an equirectangular transformer architecture, utilizing the designed equirectangular attention to capture LiDAR features in a manner with data characteristics. Then, we design a control-signal embedding injector to efficiently integrate control signals through the global-to-focused attention mechanism. Additionally, we devise a frequency modulator to assist the model in recovering high-frequency details, ensuring the clarity of the generated point cloud. To foster development in the field and optimize text-controlled generation performance, we construct nuLiDARtext which offers diverse text descriptors for 34,149 LiDAR point clouds from 850 scenes. Experiments on uncontrolled and text-controlled generation in various forms on KITTI-360 and nuScenes datasets demonstrate the superiority of our approach. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 476,857 |
2002.06605 | Fully Distributed Resilient State Estimation based on Distributed Median
Solver | In this paper, we present a scheme of fully distributed resilient state estimation for linear dynamical systems under sensor attacks. The proposed state observer consists of a network of local observers, where each of them utilizes local measurements and information transmitted from the neighbors. As a fully distributed scheme, it does not necessarily collect a majority of sensing data for the sake of attack identification, while the compromised sensors are eventually identified by the distributed network and excluded from the observers. For this, the overall network (not the individual local observer) is assumed to have redundant sensors and assumed to be connected. The proposed scheme is based on a novel design of a distributed median solver, which approximately recovers the median value of local estimates. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 164,244 |
2112.11909 | Few-shot Multi-hop Question Answering over Knowledge Base | KBQA is a task that requires answering questions by using semantic structured information in a knowledge base. Previous work in this area has been restricted due to the lack of large semantic parsing datasets and the exponential growth of the searching space with the increasing hops of relation paths. In this paper, we propose an efficient pipeline method equipped with a pre-trained language model. By adopting the Beam Search algorithm, the searching space will not be restricted to subgraphs of 3 hops. Besides, we propose a data generation strategy, which enables our model to generalize well from few training samples. We evaluate our model on an open-domain complex Chinese Question Answering task CCKS2019 and achieve an F1-score of 62.55% on the test dataset. In addition, in order to test the few-shot learning capability of our model, we randomly select 10% of the primary data to train our model; the result shows that our model can still achieve an F1-score of 58.54%, which verifies the capability of our model to process the KBQA task and the advantage in few-shot learning. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 272,827
2011.10679 | Cost-Effective Quasi-Parallel Sensing Instrumentation for Industrial
Chemical Species Tomography | Chemical Species Tomography (CST) has been widely applied for imaging of critical gas-phase parameters in industrial processes. To acquire high-fidelity images, CST is typically implemented by line-of-sight Wavelength Modulation Spectroscopy (WMS) measurements from multiple laser beams. The modulated transmission signal on each laser beam needs to be a) digitised by a high-speed analogue-to-digital converter (ADC); b) demodulated by a digital lock-in (DLI) module; and c) transferred to high-level processor for image reconstruction. Although a fully parallel data acquisition (DAQ) and signal processing system can achieve these functionalities with maximised temporal response, it leads to a highly complex, expensive and power-consuming instrumentation system with high potential for inconsistency between the sampled beams due to the electronics alone. In addition, the huge amount of spectral data sampled in parallel significantly burdens the communication process in industrial applications where in situ signal digitisation is distanced from the high-level data processing. To address these issues, a quasi-parallel sensing technique and electronic circuits were developed for industrial CST, in which the digitisation and demodulation of the multi-beam transmission signals are multiplexed over the high-frequency modulation within a wavelength scan. Our development not only maintains the temporal response of the fully parallel sensing scheme, but also facilitates the cost-effective implementation of industrial CST with very low complexity and reduced load on data transfer. The proposed technique is analytically proven, numerically examined by noise-contaminated CST simulations, and experimentally validated using a lab-scale CST system with 32 laser beams. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 207,580 |
2208.09292 | UnCommonSense: Informative Negative Knowledge about Everyday Concepts | Commonsense knowledge about everyday concepts is an important asset for AI applications, such as question answering and chatbots. Recently, we have seen an increasing interest in the construction of structured commonsense knowledge bases (CSKBs). An important part of human commonsense is about properties that do not apply to concepts, yet existing CSKBs only store positive statements. Moreover, since CSKBs operate under the open-world assumption, absent statements are considered to have unknown truth rather than being invalid. This paper presents the UNCOMMONSENSE framework for materializing informative negative commonsense statements. Given a target concept, comparable concepts are identified in the CSKB, for which a local closed-world assumption is postulated. This way, positive statements about comparable concepts that are absent for the target concept become seeds for negative statement candidates. The large set of candidates is then scrutinized, pruned and ranked by informativeness. Intrinsic and extrinsic evaluations show that our method significantly outperforms the state-of-the-art. A large dataset of informative negations is released as a resource for future research. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 313,651 |
2305.08227 | DeepFilterNet: Perceptually Motivated Real-Time Speech Enhancement | Multi-frame algorithms for single-channel speech enhancement are able to take advantage from short-time correlations within the speech signal. Deep Filtering (DF) was proposed to directly estimate a complex filter in frequency domain to take advantage of these correlations. In this work, we present a real-time speech enhancement demo using DeepFilterNet. DeepFilterNet's efficiency is enabled by exploiting domain knowledge of speech production and psychoacoustic perception. Our model is able to match state-of-the-art speech enhancement benchmarks while achieving a real-time-factor of 0.19 on a single threaded notebook CPU. The framework as well as pretrained weights have been published under an open source license. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 364,206 |
1705.01013 | Quantum Mechanical Approach to Modelling Reliability of Sensor Reports | Dempster-Shafer evidence theory is widely applied in multi-sensor data fusion. However, much uncertainty and interference exist in practical situations, especially on the battlefield. Modelling the reliability of sensor reports is still an open issue. Many methods have been proposed based on the relationships among collected data. In this letter, we propose a quantum mechanical approach to evaluate the reliability of sensor reports, which is based on the properties of a sensor itself. The proposed method is used to modify the combination of evidences. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | true | 72,782
2204.04431 | A Spiking Neural Network Structure Implementing Reinforcement Learning | At present, implementation of learning mechanisms in spiking neural networks (SNN) cannot be considered as a solved scientific problem despite plenty of SNN learning algorithms proposed. It is also true for SNN implementation of reinforcement learning (RL), while RL is especially important for SNNs because of its close relationship to the domains most promising from the viewpoint of SNN application such as robotics. In the present paper, I describe an SNN structure which, seemingly, can be used in wide range of RL tasks. The distinctive feature of my approach is usage of only the spike forms of all signals involved - sensory input streams, output signals sent to actuators and reward/punishment signals. Besides that, selecting the neuron/plasticity models, I was guided by the requirement that they should be easily implemented on modern neurochips. The SNN structure considered in the paper includes spiking neurons described by a generalization of the LIFAT (leaky integrate-and-fire neuron with adaptive threshold) model and a simple spike timing dependent synaptic plasticity model (a generalization of dopamine-modulated plasticity). My concept is based on very general assumptions about RL task characteristics and has no visible limitations on its applicability. To test it, I selected a simple but non-trivial task of training the network to keep a chaotically moving light spot in the view field of an emulated DVS camera. Successful solution of this RL problem by the SNN described can be considered as evidence in favor of efficiency of my approach. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 290,655 |
2208.06917 | MTCSNN: Multi-task Clinical Siamese Neural Network for Diabetic Retinopathy Severity Prediction | Diabetic Retinopathy (DR) has become one of the leading causes of vision impairment in working-aged people and is a severe problem worldwide. However, most of the works ignored the ordinal information of labels. In this project, we propose a novel design MTCSNN, a Multi-task Clinical Siamese Neural Network for Diabetic Retinopathy severity prediction task. The novelty of this project is to utilize the ordinal information among labels and add a new regression task, which can help the model learn more discriminative feature embedding for fine-grained classification tasks. We perform comprehensive experiments over the RetinaMNIST, comparing MTCSNN with other models like ResNet-18, 34, 50. Our results indicate that MTCSNN outperforms the benchmark models in terms of AUC and accuracy on the test dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 312,865
2305.18365 | What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks | Large Language Models (LLMs) with strong abilities in natural language processing tasks have emerged and have been applied in various kinds of areas such as science, finance and software engineering. However, the capability of LLMs to advance the field of chemistry remains unclear. In this paper, rather than pursuing state-of-the-art performance, we aim to evaluate capabilities of LLMs in a wide range of tasks across the chemistry domain. We identify three key chemistry-related capabilities including understanding, reasoning and explaining to explore in LLMs and establish a benchmark containing eight chemistry tasks. Our analysis draws on widely recognized datasets facilitating a broad exploration of the capacities of LLMs within the context of practical chemistry. Five LLMs (GPT-4, GPT-3.5, Davinci-003, Llama and Galactica) are evaluated for each chemistry task in zero-shot and few-shot in-context learning settings with carefully selected demonstration examples and specially crafted prompts. Our investigation found that GPT-4 outperformed other models and LLMs exhibit different competitive levels in eight chemistry tasks. In addition to the key findings from the comprehensive benchmark analysis, our work provides insights into the limitation of current LLMs and the impact of in-context learning settings on LLMs' performance across various chemistry tasks. The code and datasets used in this study are available at https://github.com/ChemFoundationModels/ChemLLMBench. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 368,987
1312.4036 | Mind Your Language: Effects of Spoken Query Formulation on Retrieval Effectiveness | Voice search is becoming a popular mode for interacting with search engines. As a result, research has gone into building better voice transcription engines, interfaces, and search engines that better handle inherent verbosity of queries. However, when one considers its use by non-native speakers of English, another aspect that becomes important is the formulation of the query by users. In this paper, we present the results of a preliminary study that we conducted with non-native English speakers who formulate queries for given retrieval tasks. Our results show that the current search engines are sensitive in their rankings to the query formulation, and thus highlights the need for developing more robust ranking methods. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 29,092
1602.03602 | Wireless Communications with Unmanned Aerial Vehicles: Opportunities and Challenges | Wireless communication systems that include unmanned aerial vehicles (UAVs) promise to provide cost-effective wireless connectivity for devices without infrastructure coverage. Compared to terrestrial communications or those based on high-altitude platforms (HAPs), on-demand wireless systems with low-altitude UAVs are in general faster to deploy, more flexibly re-configured, and are likely to have better communication channels due to the presence of short-range line-of-sight (LoS) links. However, the utilization of highly mobile and energy-constrained UAVs for wireless communications also introduces many new challenges. In this article, we provide an overview of UAV-aided wireless communications, by introducing the basic networking architecture and main channel characteristics, highlighting the key design considerations as well as the new opportunities to be exploited. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 52,021
2003.05882 | Stackelberg Equilibria for Two-Player Network Routing Games on Parallel Networks | We consider a two-player zero-sum network routing game in which a router wants to maximize the amount of legitimate traffic that flows from a given source node to a destination node and an attacker wants to block as much legitimate traffic as possible by flooding the network with malicious traffic. We address scenarios with asymmetric information, in which the router must reveal its policy before the attacker decides how to distribute the malicious traffic among the network links, which is naturally modeled by the notion of Stackelberg equilibria. The paper focuses on parallel networks, and includes three main contributions: we show that computing the optimal attack policy against a given routing policy is an NP-hard problem; we establish conditions under which the Stackelberg equilibria lead to no regret; and we provide a metric that can be used to quantify how uncertainty about the attacker's capabilities limits the router's performance. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 167,973
2006.06780 | Tangent Space Sensitivity and Distribution of Linear Regions in ReLU Networks | Recent articles indicate that deep neural networks are efficient models for various learning problems. However they are often highly sensitive to various changes that cannot be detected by an independent observer. As our understanding of deep neural networks with traditional generalization bounds still remains incomplete, there are several measures which capture the behaviour of the model in case of small changes at a specific state. In this paper we consider adversarial stability in the tangent space and suggest tangent sensitivity in order to characterize stability. We focus on a particular kind of stability with respect to changes in parameters that are induced by individual examples without known labels. We derive several easily computable bounds and empirical measures for feed-forward fully connected ReLU (Rectified Linear Unit) networks and connect tangent sensitivity to the distribution of the activation regions in the input space realized by the network. Our experiments suggest that even simple bounds and measures are associated with the empirical generalization gap. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 181,549
2211.13061 | A Masked Face Classification Benchmark on Low-Resolution Surveillance Images | We propose a novel image dataset focused on tiny faces wearing face masks for mask classification purposes, dubbed Small Face MASK (SF-MASK), composed of a collection made from 20k low-resolution images exported from diverse and heterogeneous datasets, ranging from 7 x 7 to 64 x 64 pixel resolution. An accurate visualization of this collection, through counting grids, made it possible to highlight gaps in the variety of poses assumed by the heads of the pedestrians. In particular, faces filmed by very high cameras, in which the facial features appear strongly skewed, are absent. To address this structural deficiency, we produced a set of synthetic images which resulted in a satisfactory covering of the intra-class variance. Furthermore, a small subsample of 1701 images contains badly worn face masks, opening to multi-class classification challenges. Experiments on SF-MASK focus on face mask classification using several classifiers. Results show that the richness of SF-MASK (real + synthetic images) leads all of the tested classifiers to perform better than exploiting comparative face mask datasets, on a fixed 1077 images testing set. Dataset and evaluation code are publicly available here: https://github.com/HumaticsLAB/sf-mask | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 332,336
2305.05486 | MAUPQA: Massive Automatically-created Polish Question Answering Dataset | Recently, open-domain question answering systems have begun to rely heavily on annotated datasets to train neural passage retrievers. However, manually annotating such datasets is both difficult and time-consuming, which limits their availability for less popular languages. In this work, we experiment with several methods for automatically collecting weakly labeled datasets and show how they affect the performance of the neural passage retrieval models. As a result of our work, we publish the MAUPQA dataset, consisting of nearly 400,000 question-passage pairs for Polish, as well as the HerBERT-QA neural retriever. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 363,170 |