| id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2412.17290 | Free-viewpoint Human Animation with Pose-correlated Reference Selection | Diffusion-based human animation aims to animate a human character based on a source human image as well as driving signals such as a sequence of poses. Leveraging the generative capacity of diffusion models, existing approaches are able to generate high-fidelity poses, but struggle with significant viewpoint changes, especially in zoom-in/zoom-out scenarios where the camera-character distance varies. This limits applications such as cinematic shot-type planning or camera control. We propose a pose-correlated reference selection diffusion network, supporting substantial viewpoint variations in human animation. Our key idea is to enable the network to utilize multiple reference images as input, since significant viewpoint changes often lead to missing appearance details on the human body. To reduce the computational cost, we first introduce a novel pose correlation module to compute similarities between non-aligned target and source poses, and then propose an adaptive reference selection strategy, utilizing the attention map to identify key regions for animation generation. To train our model, we curated a large dataset from public TED talks featuring varied shots of the same character, helping the model learn synthesis for different perspectives. Our experimental results show that with the same number of reference images, our model performs favorably compared to current SOTA methods under large viewpoint changes. We further show that the adaptive reference selection is able to choose the most relevant reference regions to generate humans under free viewpoints. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 519,910 |
2302.02596 | Ten Lessons We Have Learned in the New "Sparseland": A Short Handbook for Sparse Neural Network Researchers | This article does not propose any novel algorithm or new hardware for sparsity. Instead, it aims to serve the "common good" for the increasingly prosperous Sparse Neural Network (SNN) research community. We attempt to summarize some of the most common confusions in SNNs that one may come across in various scenarios such as paper review/rebuttal and talks - many drawn from the authors' own bittersweet experiences! We feel that doing so is meaningful and timely, since the focus of SNN research is notably shifting from traditional pruning to more diverse and profound forms of sparsity before, during, and after training. The intricate relationships between their scopes, assumptions, and approaches lead to misunderstandings for non-experts and even experts in SNNs. In response, we summarize ten Q\&As of SNNs from many key aspects, including dense vs. sparse, unstructured sparse vs. structured sparse, pruning vs. sparse training, dense-to-sparse training vs. sparse-to-sparse training, static sparsity vs. dynamic sparsity, before-training/during-training vs. post-training sparsity, and many more. We strive to provide proper and generically applicable answers to clarify those confusions to the best extent possible. We hope our summary provides useful general knowledge for people who want to enter and engage with this exciting community, and also provides some "mind of ease" convenience for SNN researchers to explain their work in the right contexts. At the very least (and perhaps as this article's most insignificant target functionality), if you are writing or planning to write a paper or rebuttal in the field of SNNs, we hope some of our answers could help you! | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 344,057 |
2308.14845 | SMOClust: Synthetic Minority Oversampling based on Stream Clustering for Evolving Data Streams | Many real-world data stream applications not only suffer from concept drift but also class imbalance. Yet, very few existing studies have investigated this joint challenge. Data difficulty factors, which have been shown to be key challenges in class imbalanced data streams, are not taken into account by existing approaches when learning class imbalanced data streams. In this work, we propose a drift adaptable oversampling strategy to synthesise minority class examples based on stream clustering. The motivation is that stream clustering methods continuously update themselves to reflect the characteristics of the current underlying concept, including data difficulty factors. This nature can potentially be used to compress past information without caching data in the memory explicitly. Based on the compressed information, synthetic examples can be created within the region that recently generated new minority class examples. Experiments with artificial and real-world data streams show that the proposed approach can handle concept drift involving different minority class decomposition better than existing approaches, especially when the data stream is severely class imbalanced and presents high proportions of safe and borderline minority class examples. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | 388,466 |
2203.08215 | Auto-Gait: Automatic Ataxia Risk Assessment with Computer Vision on Gait Task Videos | In this paper, we investigated whether we can 1) detect participants with ataxia-specific gait characteristics (risk-prediction), and 2) assess severity of ataxia from gait (severity-assessment) using computer vision. We created a dataset of 155 videos from 89 participants, 24 controls and 65 diagnosed with (or are pre-manifest) spinocerebellar ataxias (SCAs), performing the gait task of the Scale for the Assessment and Rating of Ataxia (SARA) from 11 medical sites located in 8 different states across the United States. We develop a computer vision pipeline to detect, track, and separate out the participants from their surroundings and construct several features from their body pose coordinates to capture gait characteristics like step width, step length, swing, stability, speed, etc. Our risk-prediction model achieves 83.06% accuracy and an 80.23% F1 score. Similarly, our severity-assessment model achieves a mean absolute error (MAE) score of 0.6225 and a Pearson's correlation coefficient score of 0.7268. Our models still performed competitively when evaluated on data from sites not used during training. Furthermore, through feature importance analysis, we found that our models associate wider steps, decreased walking speed, and increased instability with greater ataxia severity, which is consistent with previously established clinical knowledge. Our models create possibilities for remote ataxia assessment in non-clinical settings in the future, which could significantly improve accessibility of ataxia care. Furthermore, our underlying dataset was assembled from a geographically diverse cohort, highlighting its potential to further increase equity. The code used in this study is open to the public, and the anonymized body pose landmark dataset is also available upon request. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 285,712 |
2410.14248 | Addressing Blind Guessing: Calibration of Selection Bias in Multiple-Choice Question Answering by Video Language Models | Evaluating Video Language Models (VLMs) is a challenging task. Due to its transparency, Multiple-Choice Question Answering (MCQA) is widely used to measure the performance of these models through accuracy. However, existing MCQA benchmarks fail to capture the full reasoning capabilities of VLMs due to selection bias, whereby models disproportionately favor certain answer options based on positional patterns observed during training. In this work, we conduct a comprehensive empirical analysis of several VLM architectures across major datasets designed to assess complex video-focused reasoning. We identify where the bias is most pronounced and demonstrate to what extent model responses reflect genuine understanding of video content and related questions, as opposed to reliance on arbitrary patterns or superficial cues, such as answer position. By decomposing the MCQA task and adapting fairness bias metrics to VLMs, we introduce a post-processing calibration technique BOLD to balance this bias. Our results show that reducing selection bias improves not only debiasing metrics but also overall model performance, including Accuracy and F1 Mean score. Our method, by suppressing "blind guessing", offers a more cost- and time-effective approach to mitigating selection bias compared to existing techniques. This study represents the first focused investigation of selection bias in video-to-text LLM-powered models. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 499,952 |
2309.13516 | InSpaceType: Reconsider Space Type in Indoor Monocular Depth Estimation | Indoor monocular depth estimation has attracted increasing research interest. Most previous works have been focusing on methodology, primarily experimenting with NYU-Depth-V2 (NYUv2) Dataset, and only concentrated on the overall performance over the test set. However, little is known regarding robustness and generalization when it comes to applying monocular depth estimation methods to real-world scenarios where highly varying and diverse functional \textit{space types} are present such as library or kitchen. A study for performance breakdown into space types is essential to realize a pretrained model's performance variance. To facilitate our investigation for robustness and address limitations of previous works, we collect InSpaceType, a high-quality and high-resolution RGBD dataset for general indoor environments. We benchmark 12 recent methods on InSpaceType and find they severely suffer from performance imbalance concerning space types, which reveals their underlying bias. We extend our analysis to 4 other datasets, 3 mitigation approaches, and the ability to generalize to unseen space types. Our work marks the first in-depth investigation of performance imbalance across space types for indoor monocular depth estimation, drawing attention to potential safety concerns for model deployment without considering space types, and further shedding light on potential ways to improve robustness. See \url{https://depthcomputation.github.io/DepthPublic} for data and the supplementary document. The benchmark list on the GitHub project page is kept updated with the latest monocular depth estimation methods. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 394,241 |
1510.02873 | Group testing schemes from codes and designs | In group testing, simple binary-output tests are designed to identify a small number $t$ of defective items that are present in a large population of $N$ items. Each test takes as input a group of items and produces a binary output indicating whether the group is free of the defective items or contains one or more of them. In this paper we study a relaxation of the combinatorial group testing problem. A matrix is called $(t,\epsilon)$-disjunct if it gives rise to a nonadaptive group testing scheme with the property of identifying a uniformly random $t$-set of defective subjects out of a population of size $N$ with false positive probability of an item at most $\epsilon$. We establish a new connection between $(t,\epsilon)$-disjunct matrices and error correcting codes based on the dual distance of the codes and derive estimates of the parameters of codes that give rise to such schemes. Our methods rely on the moments of the distance distribution of codes and inequalities for moments of sums of independent random variables. We also provide a new connection between group testing schemes and combinatorial designs. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 47,767 |
1412.7003 | A Bayesian encourages dropout | Dropout is one of the key techniques to prevent learning from overfitting. It has been explained that dropout works as a kind of modified L2 regularization. Here, we shed light on dropout from a Bayesian standpoint. The Bayesian interpretation enables us to optimize the dropout rate, which is beneficial for learning the weight parameters and for prediction after learning. The experimental results also encourage the optimization of dropout. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 38,745 |
2106.10705 | Automated Deepfake Detection | In this paper, we propose to utilize Automated Machine Learning to adaptively search a neural architecture for deepfake detection. This is the first time automated machine learning has been employed for deepfake detection. Based on our explored search space, our proposed method achieves competitive prediction accuracy compared to previous methods. To improve the generalizability of our method, especially when training data and testing data are manipulated by different methods, we propose a simple yet effective strategy in our network learning process: making it estimate potential manipulation regions besides predicting the real/fake labels. Unlike previous works that manually design neural networks, our method can relieve us from the high labor cost of network construction. More than that, compared to previous works, our method depends much less on prior knowledge, e.g., which manipulation method is utilized or where exactly the fake image is manipulated. Extensive experimental results on two benchmark datasets demonstrate the effectiveness of our proposed method for deepfake detection. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 242,125 |
2207.06403 | 3D Concept Grounding on Neural Fields | In this paper, we address the challenging problem of 3D concept grounding (i.e. segmenting and learning visual concepts) by looking at RGBD images and reasoning about paired questions and answers. Existing visual reasoning approaches typically utilize supervised methods to extract 2D segmentation masks on which concepts are grounded. In contrast, humans are capable of grounding concepts on the underlying 3D representation of images. However, traditionally inferred 3D representations (e.g., point clouds, voxelgrids, and meshes) cannot capture continuous 3D features flexibly, thus making it challenging to ground concepts to 3D regions based on the language description of the object being referred to. To address both issues, we propose to leverage the continuous, differentiable nature of neural fields to segment and learn concepts. Specifically, each 3D coordinate in a scene is represented as a high-dimensional descriptor. Concept grounding can then be performed by computing the similarity between the descriptor vector of a 3D coordinate and the vector embedding of a language concept, which enables segmentations and concept learning to be jointly learned on neural fields in a differentiable fashion. As a result, both 3D semantic and instance segmentations can emerge directly from question answering supervision using a set of defined neural operators on top of neural fields (e.g., filtering and counting). Experimental results show that our proposed framework outperforms unsupervised/language-mediated segmentation models on semantic and instance segmentation tasks, as well as outperforms existing models on the challenging 3D aware visual reasoning tasks. Furthermore, our framework can generalize well to unseen shape categories and real scans. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | true | 307,869 |
2306.05675 | Illumination Controllable Dehazing Network based on Unsupervised Retinex Embedding | On the one hand, the dehazing task is an ill-posed problem, which means that no unique solution exists. On the other hand, the dehazing task should take into account the subjective factor, which is to give the user selectable dehazed images rather than a single result. Therefore, this paper proposes a multi-output dehazing network by introducing illumination controllable ability, called IC-Dehazing. The proposed IC-Dehazing can change the illumination intensity by adjusting the factor of the illumination controllable module, which is realized based on the interpretable Retinex theory. Moreover, the backbone dehazing network of IC-Dehazing consists of a Transformer with double decoders for high-quality image restoration. Further, the prior-based loss function and unsupervised training strategy enable IC-Dehazing to complete the parameter learning process without the need for paired data. To demonstrate the effectiveness of the proposed IC-Dehazing, quantitative and qualitative experiments are conducted on image dehazing, semantic segmentation, and object detection tasks. Code is available at https://github.com/Xiaofeng-life/ICDehazing. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 372,287 |
1903.12519 | A Provable Defense for Deep Residual Networks | We present a training system, which can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100. Our approach is based on differentiable abstract interpretation and introduces two novel concepts: (i) abstract layers for fine-tuning the precision and scalability of the abstraction, (ii) a flexible domain specific language (DSL) for describing training objectives that combine abstract and concrete losses with arbitrary specifications. Our training method is implemented in the DiffAI system. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | true | 125,749 |
2306.04605 | Empowering Business Transformation: The Positive Impact and Ethical Considerations of Generative AI in Software Product Management -- A Systematic Literature Review | Generative Artificial Intelligence (GAI) has made outstanding strides in recent years, with a significant impact on software product management. Drawing on pertinent articles from 2016 to 2023, this systematic literature review reveals generative AI's potential applications, benefits, and constraints in this area. The study shows that the technology can assist in idea generation, market research, customer insights, product requirements engineering, and product development. It can help reduce development time and costs through automatic code generation, customer feedback analysis, and more. However, concerns about the technology's accuracy, reliability, and ethical implications persist. Ultimately, generative AI's practical application can significantly improve software product management activities, leading to more efficient use of resources, better product outcomes, and improved end-user experiences. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 371,817 |
1811.04231 | Speech Intention Understanding in a Head-final Language: A Disambiguation Utilizing Intonation-dependency | For a large portion of real-life utterances, the intention cannot be solely decided by either their semantic or syntactic characteristics. Although not all the sociolinguistic and pragmatic information can be digitized, at least phonetic features are indispensable in understanding the spoken language. Especially in head-final languages such as Korean, sentence-final prosody has great importance in identifying the speaker's intention. This paper suggests a system which identifies the inherent intention of a spoken utterance given its transcript, in some cases using auxiliary acoustic features. The main point here is a separate distinction for cases where discrimination of intention requires an acoustic cue. Thus, the proposed classification system decides whether the given utterance is a fragment, statement, question, command, or a rhetorical question/command, utilizing the intonation-dependency coming from the head-finality. Based on an intuitive understanding of the Korean language that is engaged in the data annotation, we construct a network which identifies the intention of a speech, and validate its utility with the test sentences. The system, if combined with up-to-date speech recognizers, is expected to be flexibly inserted into various language understanding modules. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 113,030 |
2207.10295 | Addressing Optimism Bias in Sequence Modeling for Reinforcement Learning | Impressive results in natural language processing (NLP) based on the Transformer neural network architecture have inspired researchers to explore viewing offline reinforcement learning (RL) as a generic sequence modeling problem. Recent works based on this paradigm have achieved state-of-the-art results in several of the mostly deterministic offline Atari and D4RL benchmarks. However, because these methods jointly model the states and actions as a single sequencing problem, they struggle to disentangle the effects of the policy and world dynamics on the return. Thus, in adversarial or stochastic environments, these methods lead to overly optimistic behavior that can be dangerous in safety-critical systems like autonomous driving. In this work, we propose a method that addresses this optimism bias by explicitly disentangling the policy and world models, which allows us at test time to search for policies that are robust to multiple possible futures in the environment. We demonstrate our method's superior performance on a variety of autonomous driving tasks in simulation. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 309,210 |
2305.17871 | PropNet: Propagating 2D Annotation to 3D Segmentation for Gastric Tumors on CT Scans | **Background:** Accurate 3D CT scan segmentation of gastric tumors is pivotal for diagnosis and treatment. The challenges lie in the irregular shapes, blurred boundaries of tumors, and the inefficiency of existing methods. **Purpose:** We conducted a study to introduce a model, utilizing human-guided knowledge and unique modules, to address the challenges of 3D tumor segmentation. **Methods:** We developed the PropNet framework, propagating radiologists' knowledge from 2D annotations to the entire 3D space. This model consists of a proposing stage for coarse segmentation and a refining stage for improved segmentation, using two-way branches for enhanced performance and an up-down strategy for efficiency. **Results:** With 98 patient scans for training and 30 for validation, our method achieves a significant agreement with manual annotation (Dice of 0.803) and improves efficiency. The performance is comparable in different scenarios and with various radiologists' annotations (Dice between 0.785 and 0.803). Moreover, the model shows improved prognostic prediction performance (C-index of 0.620 vs. 0.576) on an independent validation set of 42 patients with advanced gastric cancer. **Conclusions:** Our model generates accurate tumor segmentation efficiently and stably, improving prognostic performance and reducing high-throughput image reading workload. This model can accelerate the quantitative analysis of gastric tumors and enhance downstream task performance. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 368,779 |
1812.07611 | GP-CNAS: Convolutional Neural Network Architecture Search with Genetic Programming | Convolutional neural networks (CNNs) are effective at solving difficult problems like visual recognition, speech recognition and natural language processing. However, performance gain comes at the cost of laborious trial-and-error in designing deeper CNN architectures. In this paper, a genetic programming (GP) framework for convolutional neural network architecture search, abbreviated as GP-CNAS, is proposed to automatically search for optimal CNN architectures. GP-CNAS encodes CNNs as trees where leaf nodes (GP terminals) are selected residual blocks and non-leaf nodes (GP functions) specify the block assembling procedure. Our tree-based representation enables easy design and flexible implementation of genetic operators. Specifically, we design a dynamic crossover operator that strikes a balance between exploration and exploitation, which emphasizes CNN complexity at early stage and CNN diversity at later stage. Therefore, the desired CNN architecture with balanced depth and width can be found within limited trials. Moreover, our GP-CNAS framework is highly compatible with other manually-designed and NAS-generated block types as well. Experimental results on the CIFAR-10 dataset show that GP-CNAS is competitive among the state-of-the-art automatic and semi-automatic NAS algorithms. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 116,843 |
2302.08287 | Unsupervised Evaluation of Out-of-distribution Detection: A Data-centric Perspective | Out-of-distribution (OOD) detection methods assume that they have test ground truths, i.e., whether individual test samples are in-distribution (IND) or OOD. However, in the real world, we do not always have such ground truths, and thus do not know which sample is correctly detected and cannot compute the metric like AUROC to evaluate the performance of different OOD detection methods. In this paper, we are the first to introduce the unsupervised evaluation problem in OOD detection, which aims to evaluate OOD detection methods in real-world changing environments without OOD labels. We propose three methods to compute Gscore as an unsupervised indicator of OOD detection performance. We further introduce a new benchmark Gbench, which has 200 real-world OOD datasets of various label spaces to train and evaluate our method. Through experiments, we find a strong quantitative correlation between Gscore and the OOD detection performance. Extensive experiments demonstrate that our Gscore achieves state-of-the-art performance. Gscore also generalizes well with different IND/OOD datasets, OOD detection methods, backbones and dataset sizes. We further provide interesting analyses of the effects of backbones and IND/OOD datasets on OOD detection performance. The data and code will be available. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 346,010 |
2012.14271 | Towards Fully Automated Manga Translation | We tackle the problem of machine translation of manga, Japanese comics. Manga translation involves two important problems in machine translation: context-aware and multimodal translation. Since text and images are mixed in an unstructured fashion in manga, obtaining context from the image is essential for manga translation. However, how to extract context from images and integrate it into MT models remains an open problem. In addition, corpora and benchmarks to train and evaluate such models are currently unavailable. In this paper, we make the following four contributions that establish the foundation of manga translation research. First, we propose a multimodal context-aware translation framework. We are the first to incorporate context information obtained from the manga image. It enables us to translate texts in speech bubbles that cannot be translated without using context information (e.g., texts in other speech bubbles, gender of speakers, etc.). Second, for training the model, we propose an approach to automatic corpus construction from pairs of original manga and their translations, by which a large parallel corpus can be constructed without any manual labeling. Third, we created a new benchmark to evaluate manga translation. Finally, on top of our proposed methods, we devised the first comprehensive system for fully automated manga translation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 213,459 |
2405.00632 | When Quantization Affects Confidence of Large Language Models? | Recent studies introduced effective compression techniques for Large Language Models (LLMs) via post-training quantization or low-bit weight representation. Although quantized weights offer storage efficiency and allow for faster inference, existing works have indicated that quantization might compromise performance and exacerbate biases in LLMs. This study investigates the confidence and calibration of quantized models, considering factors such as language model type and scale as contributors to quantization loss. Firstly, we reveal that quantization with GPTQ to 4-bit results in a decrease in confidence regarding true labels, with varying impacts observed among different language models. Secondly, we observe fluctuations in the impact on confidence across different scales. Finally, we propose an explanation for quantization loss based on confidence levels, indicating that quantization disproportionately affects samples where the full model exhibited low confidence levels in the first place. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 450,996 |
1612.04970 | Improving Neural Network Generalization by Combining Parallel Circuits with Dropout | In an attempt to reduce the lengthy training times of neural networks, we proposed Parallel Circuits (PCs), a biologically inspired architecture. Previous work has shown that this approach fails to maintain generalization performance in spite of achieving sharp speed gains. To address this issue, and motivated by the way Dropout prevents node co-adaptation, in this paper we suggest an improvement by extending Dropout to the PC architecture. The paper provides multiple insights into this combination, including a variety of fusion approaches. Experiments show promising results in which improved error rates are achieved in most cases, whilst maintaining the speed advantage of the PC approach. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 65,615 |
2311.02691 | Age of Information Analysis for CR-NOMA Aided Uplink Systems with Randomly Arrived Packets | This paper studies the application of cognitive radio inspired non-orthogonal multiple access (CR-NOMA) to reduce age of information (AoI) for uplink transmission. In particular, a time division multiple access (TDMA) based legacy network is considered, where each user is allocated with a dedicated time slot to transmit its status update information. The CR-NOMA is implemented as an add-on to the TDMA legacy network, which enables each user to have more opportunities to transmit by sharing other user's time slots. A rigorous analytical framework is developed to obtain the expressions for AoIs achieved by CR-NOMA with and without re-transmission, by taking the randomness of the status update generating process into consideration. Numerical results are presented to verify the accuracy of the developed analysis. It is shown that the AoI can be significantly reduced by applying CR-NOMA compared to TDMA. Moreover, the use of re-transmission is helpful to reduce AoI, especially when the status arrival rate is low. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 405,542 |
2111.10544 | Towards Scalable Unpaired Virtual Try-On via Patch-Routed
Spatially-Adaptive GAN | Image-based virtual try-on is one of the most promising applications of human-centric image generation due to its tremendous real-world potential. Yet, as most try-on approaches fit in-shop garments onto a target person, they require the laborious and restrictive construction of a paired training dataset, severely limiting their scalability. While a few recent works attempt to transfer garments directly from one person to another, alleviating the need to collect paired datasets, their performance is impacted by the lack of paired (supervised) information. In particular, disentangling style and spatial information of the garment becomes a challenge, which existing methods either address by requiring auxiliary data or extensive online optimization procedures, thereby still inhibiting their scalability. To achieve a \emph{scalable} virtual try-on system that can transfer arbitrary garments between a source and a target person in an unsupervised manner, we thus propose a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN (PASTA-GAN), that facilitates real-world unpaired virtual try-on. Specifically, to disentangle the style and spatial information of each garment, PASTA-GAN consists of an innovative patch-routed disentanglement module for successfully retaining garment texture and shape characteristics. Guided by the source person keypoints, the patch-routed disentanglement module first decouples garments into normalized patches, thus eliminating the inherent spatial information of the garment, and then reconstructs the normalized patches to the warped garment complying with the target person pose. Given the warped garment, PASTA-GAN further introduces novel spatially-adaptive residual blocks that guide the generator to synthesize more realistic garment details. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 267,354 |
1504.02164 | Linearly Supporting Feature Extraction For Automated Estimation Of
Stellar Atmospheric Parameters | We describe a scheme to extract linearly supporting (LSU) features from stellar spectra to automatically estimate the atmospheric parameters $T_{eff}$, log$~g$, and [Fe/H]. "Linearly supporting" means that the atmospheric parameters can be accurately estimated from the extracted features through a linear model. The successive steps of the process are as follows: first, decompose the spectrum using a wavelet packet (WP) and represent it by the derived decomposition coefficients; second, detect representative spectral features from the decomposition coefficients using the proposed method Least Absolute Shrinkage and Selection Operator (LARS)$_{bs}$; third, estimate the atmospheric parameters $T_{eff}$, log$~g$, and [Fe/H] from the detected features using a linear regression method. One prominent characteristic of this scheme is its ability to evaluate quantitatively the contribution of each detected feature to the atmospheric parameter estimate and also to trace back the physical significance of that feature. This work also shows that the usefulness of a component depends on both wavelength and frequency. The proposed scheme has been evaluated on both real spectra from the Sloan Digital Sky Survey (SDSS)/SEGUE and synthetic spectra calculated from Kurucz's NEWODF models. On real spectra, we extracted 23 features to estimate $T_{eff}$, 62 features for log$~g$, and 68 features for [Fe/H]. Test consistencies between our estimates and those provided by the Spectroscopic Parameter Pipeline of SDSS show that the mean absolute errors (MAEs) are 0.0062 dex for log$~T_{eff}$ (83 K for $T_{eff}$), 0.2345 dex for log$~g$, and 0.1564 dex for [Fe/H]. For the synthetic spectra, the MAE test accuracies are 0.0022 dex for log$~T_{eff}$ (32 K for $T_{eff}$), 0.0337 dex for log$~g$, and 0.0268 dex for [Fe/H]. 
| false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 41,888 |
2111.07753 | An Adaptive Framework for Reliable Trajectory Following in
Changing-Contact Robot Manipulation Tasks | We describe a framework for changing-contact robot manipulation tasks that require the robot to make and break contacts with objects and surfaces. The discontinuous interaction dynamics of such tasks make it difficult to construct and use a single dynamics model or control strategy, and the highly non-linear nature of the dynamics during contact changes can be damaging to the robot and the objects. We present an adaptive control framework that enables the robot to incrementally learn to predict contact changes in a changing contact task, learn the interaction dynamics of the piece-wise continuous system, and provide smooth and accurate trajectory tracking using a task-space variable impedance controller. We experimentally compare the performance of our framework against that of representative control methods to establish that the adaptive control and incremental learning components of our framework are needed to achieve smooth control in the presence of discontinuous dynamics in changing-contact robot manipulation tasks. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 266,470 |
2308.07462 | Playing with Words: Comparing the Vocabulary and Lexical Richness of
ChatGPT and Humans | The introduction of Artificial Intelligence (AI) generative language models such as GPT (Generative Pre-trained Transformer) and tools such as ChatGPT has triggered a revolution that can transform how text is generated. This has many implications, for example, as AI-generated text becomes a significant fraction of the text, would this have an effect on the language capabilities of readers and also on the training of newer AI tools? Would it affect the evolution of languages? Focusing on one specific aspect of the language: words; will the use of tools such as ChatGPT increase or reduce the vocabulary used or the lexical richness? This has implications for words, as those not included in AI-generated content will tend to be less and less popular and may eventually be lost. In this work, we perform an initial comparison of the vocabulary and lexical richness of ChatGPT and humans when performing the same tasks. In more detail, two datasets containing the answers to different types of questions answered by ChatGPT and humans, and a third dataset in which ChatGPT paraphrases sentences and questions are used. The analysis shows that ChatGPT tends to use fewer distinct words and lower lexical richness than humans. These results are very preliminary and additional datasets and ChatGPT configurations have to be evaluated to extract more general conclusions. Therefore, further research is needed to understand how the use of ChatGPT and more broadly generative AI tools will affect the vocabulary and lexical richness in different types of text and languages. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 385,521 |
2411.08766 | Mapping Methane -- The Impact of Dairy Farm Practices on Emissions
Through Satellite Data and Machine Learning | This study investigates the correlation between dairy farm characteristics and methane concentrations as derived from satellite observations in Eastern Canada. Utilizing data from 11 dairy farms collected between January 2020 and December 2022, we integrated Sentinel-5P satellite methane data with critical farm-level attributes, including herd genetics, feeding practices, and management strategies. Initial analyses revealed significant correlations with methane concentrations, leading to the application of Variance Inflation Factor (VIF) and Principal Component Analysis (PCA) to address multicollinearity and enhance model stability. Subsequently, machine learning models - specifically Random Forest and Neural Networks - were employed to evaluate feature importance and predict methane emissions. Our findings indicate a strong negative correlation between the Estimated Breeding Value (EBV) for protein percentage and methane concentrations, suggesting that genetic selection for higher milk protein content could be an effective strategy for emissions reduction. The integration of atmospheric transport models with satellite data further refined our emission estimates, significantly enhancing accuracy and spatial resolution. This research underscores the potential of advanced satellite monitoring, machine learning techniques, and atmospheric modeling in improving methane emission assessments within the dairy sector. It emphasizes the critical role of farm-specific characteristics in developing effective mitigation strategies. Future investigations should focus on expanding the dataset and incorporating inversion modeling for more precise emission quantification. Balancing ecological impacts with economic viability will be essential for fostering sustainable dairy farming practices. 
| false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 508,011 |
1512.03523 | Evolution of Privacy Loss in Wikipedia | The cumulative effect of collective online participation has an important and adverse impact on individual privacy. As an online system evolves over time, new digital traces of individual behavior may uncover previously hidden statistical links between an individual's past actions and her private traits. To quantify this effect, we analyze the evolution of individual privacy loss by studying the edit history of Wikipedia over 13 years, including more than 117,523 different users performing 188,805,088 edits. We trace each Wikipedia's contributor using apparently harmless features, such as the number of edits performed on predefined broad categories in a given time period (e.g. Mathematics, Culture or Nature). We show that even at this unspecific level of behavior description, it is possible to use off-the-shelf machine learning algorithms to uncover usually undisclosed personal traits, such as gender, religion or education. We provide empirical evidence that the prediction accuracy for almost all private traits consistently improves over time. Surprisingly, the prediction performance for users who stopped editing after a given time still improves. The activities performed by new users seem to have contributed more to this effect than additional activities from existing (but still active) users. Insights from this work should help users, system designers, and policy makers understand and make long-term design choices in online content creation systems. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 50,049 |
1707.05699 | Network analysis of Japanese global business using quasi-exhaustive
micro-data for Japanese overseas subsidiaries | Network analysis techniques remain rarely used for understanding international management strategies. Our paper highlights their value as research tool in this field of social science using a large set of micro-data (20,000) to investigate the presence of networks of subsidiaries overseas. The research question is the following: to what extent did/do global Japanese business networks mirror organizational models existing in Japan? In particular, we would like to assess how much the links building such business networks are shaped by the structure of big-size industrial conglomerates of firms headquartered in Japan, also described as HK. The major part of the academic community in the fields of management and industrial organization considers that formal links can be identified among firms belonging to HK. Miwa and Ramseyer (Miwa and Ramseyer 2002; Ramseyer 2006) challenge this claim and argue that the evidence supporting the existence of HK is weak. So far, quantitative empirical investigation has been conducted exclusively using data for firms incorporated in Japan. Our study tests the Miwa-Ramseyer hypothesis (MRH) at the global level using information on the network of Japanese subsidiaries overseas. The results obtained lead us to reject the MRH for the global dataset, as well as for subsets restricted to the two main regions/countries of destination of Japanese foreign investment. The results are robust to the weighting of the links, with different specifications, and are observed in most industrial sectors. The global Japanese network became increasingly complex during the late 20th century as a consequence of increase in the number of Japanese subsidiaries overseas but the key features of the structure remained rather stable. We draw implications of these findings for academic research in international business and for professionals involved in corporate strategy. 
| false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 77,279 |
2409.20043 | OPONeRF: One-Point-One NeRF for Robust Neural Rendering | In this paper, we propose a One-Point-One NeRF (OPONeRF) framework for robust scene rendering. Existing NeRFs are designed based on a key assumption that the target scene remains unchanged between the training and test time. However, small but unpredictable perturbations such as object movements, light changes and data contaminations broadly exist in real-life 3D scenes, which lead to significantly defective or failed rendering results even for the recent state-of-the-art generalizable methods. To address this, we propose a divide-and-conquer framework in OPONeRF that adaptively responds to local scene variations via personalizing appropriate point-wise parameters, instead of fitting a single set of NeRF parameters that are inactive to test-time unseen changes. Moreover, to explicitly capture the local uncertainty, we decompose the point representation into deterministic mapping and probabilistic inference. In this way, OPONeRF learns the sharable invariance and unsupervisedly models the unexpected scene variations between the training and testing scenes. To validate the effectiveness of the proposed method, we construct benchmarks from both realistic and synthetic data with diverse test-time perturbations including foreground motions, illumination variations and multi-modality noises, which are more challenging than conventional generalization and temporal reconstruction benchmarks. Experimental results show that our OPONeRF outperforms state-of-the-art NeRFs on various evaluation metrics through benchmark experiments and cross-scene evaluations. We further show the efficacy of the proposed method via experimenting on other existing generalization-based benchmarks and incorporating the idea of One-Point-One NeRF into other advanced baseline methods. 
| false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 492,972 |
2410.20916 | NeuGPT: Unified multi-modal Neural GPT | This paper introduces NeuGPT, a groundbreaking multi-modal language generation model designed to harmonize the fragmented landscape of neural recording research. Traditionally, studies in the field have been compartmentalized by signal type, with EEG, MEG, ECoG, SEEG, fMRI, and fNIRS data being analyzed in isolation. Recognizing the untapped potential for cross-pollination and the adaptability of neural signals across varying experimental conditions, we set out to develop a unified model capable of interfacing with multiple modalities. Drawing inspiration from the success of pre-trained large models in NLP, computer vision, and speech processing, NeuGPT is architected to process a diverse array of neural recordings and interact with speech and text data. Our model mainly focuses on brain-to-text decoding, improving SOTA from 6.94 to 12.92 on BLEU-1 and 6.93 to 13.06 on ROUGE-1F. It can also simulate brain signals, thereby serving as a novel neural interface. Code is available at \href{https://github.com/NeuSpeech/NeuGPT}{NeuSpeech/NeuGPT (https://github.com/NeuSpeech/NeuGPT) .} | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 503,017 |
2001.03707 | Superconvergence of Online Optimization for Model Predictive Control | We develop a one-Newton-step-per-horizon, online, lag-$L$, model predictive control (MPC) algorithm for solving discrete-time, equality-constrained, nonlinear dynamic programs. Based on recent sensitivity analysis results for the target problems class, we prove that the approach exhibits a behavior that we call superconvergence; that is, the tracking error with respect to the full horizon solution is not only stable for successive horizon shifts, but also decreases with increasing shift order to a minimum value that decays exponentially in the length of the receding horizon. The key analytical step is the decomposition of the one-step error recursion of our algorithm into algorithmic error and perturbation error. We show that the perturbation error decays exponentially with the lag between two consecutive receding horizons, while~the algorithmic error, determined by Newton's method, achieves quadratic convergence instead. Overall this approach induces our local exponential convergence result in terms of the receding horizon length for suitable values of $L$. Numerical experiments validate our theoretical findings. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 160,038 |
2208.01340 | Parameterizing Kterm Hashing | Kterm Hashing provides an innovative approach to novelty detection on massive data streams. Previous research focused on maximizing the efficiency of Kterm Hashing and succeeded in scaling First Story Detection to Twitter-size data stream without sacrificing detection accuracy. In this paper, we focus on improving the effectiveness of Kterm Hashing. Traditionally, all kterms are considered as equally important when calculating a document's degree of novelty with respect to the past. We believe that certain kterms are more important than others and hypothesize that uniform kterm weights are sub-optimal for determining novelty in data streams. To validate our hypothesis, we parameterize Kterm Hashing by assigning weights to kterms based on their characteristics. Our experiments apply Kterm Hashing in a First Story Detection setting and reveal that parameterized Kterm Hashing can surpass state-of-the-art detection accuracy and significantly outperform the uniformly weighted approach. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 311,135 |
2412.05749 | A Comparative Study on Code Generation with Transformers | In an era of widespread influence of Natural Language Processing (NLP), there have been multiple research efforts to supplant traditional manual coding techniques with automated systems capable of generating solutions autonomously. With rapid research for code generation and a sole focus on large language models, there emerges a need to compare and evaluate the performance of transformer architectures based on several complexities of the model. This paper introduces the concept of a "A Comparative Study on Code Generation with Transformers," a model based on Transformer architecture, and NLP methodologies to automatically generate C++ source code for different varieties of problems. Here, a comparative study is performed to evaluate the robustness of transformer-based models on the basis of their architecture complexities and their capability to handle diverse problem sets, from basic arithmetic to complex computations. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 514,957 |
2106.15397 | Automated Evolutionary Approach for the Design of Composite Machine
Learning Pipelines | The effectiveness of the machine learning methods for real-world tasks depends on the proper structure of the modeling pipeline. The proposed approach is aimed to automate the design of composite machine learning pipelines, which is equivalent to computation workflows that consist of models and data operations. The approach combines key ideas of both automated machine learning and workflow management systems. It designs the pipelines with a customizable graph-based structure, analyzes the obtained results, and reproduces them. The evolutionary approach is used for the flexible identification of pipeline structure. The additional algorithms for sensitivity analysis, atomization, and hyperparameter tuning are implemented to improve the effectiveness of the approach. Also, the software implementation on this approach is presented as an open-source framework. The set of experiments is conducted for the different datasets and tasks (classification, regression, time series forecasting). The obtained results confirm the correctness and effectiveness of the proposed approach in the comparison with the state-of-the-art competitors and baseline solutions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 243,753 |
1505.01606 | A comparative study of approaches in user-centered health information
retrieval | In this paper, we survey various user-centered or context-based biomedical health information retrieval systems. We present and discuss the performance of systems submitted in CLEF eHealth 2014 Task 3 for this purpose. We classify and focus on comparing the two most prevalent retrieval models in biomedical information retrieval, namely: Language Model (LM) and Vector Space Model (VSM). We also report on the effectiveness of using external medical resources and ontologies like MeSH, Metamap, UMLS, etc. We observed that the LM-based retrieval systems outperform VSM-based systems on various fronts. From the results we conclude that the state-of-the-art system scores were 0.4146 for MAP, 0.7560 for P@10, and 0.7445 for NDCG@10. All of these scores were reported by systems built on language modelling approaches. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 42,860 |
2404.11277 | Quantum-inspired Techniques in Tensor Networks for Industrial Contexts | In this paper we present a study of the applicability and feasibility of quantum-inspired algorithms and techniques in tensor networks for industrial environments and contexts, with a compilation of the available literature and an analysis of the use cases that may be affected by such methods. In addition, we explore the limitations of such techniques in order to determine their potential scalability. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 447,446 |
2012.13083 | Self-Supervised Representation Learning for Astronomical Images | Sky surveys are the largest data generators in astronomy, making automated tools for extracting meaningful scientific information an absolute necessity. We show that, without the need for labels, self-supervised learning recovers representations of sky survey images that are semantically useful for a variety of scientific tasks. These representations can be directly used as features, or fine-tuned, to outperform supervised methods trained only on labeled data. We apply a contrastive learning framework on multi-band galaxy photometry from the Sloan Digital Sky Survey (SDSS) to learn image representations. We then use them for galaxy morphology classification, and fine-tune them for photometric redshift estimation, using labels from the Galaxy Zoo 2 dataset and SDSS spectroscopy. In both downstream tasks, using the same learned representations, we outperform the supervised state-of-the-art results, and we show that our approach can achieve the accuracy of supervised models while using 2-4 times fewer labels for training. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 213,102 |
2201.12905 | Modularity-based Backbone Extraction in Weighted Complex Networks | The constantly growing size of real-world networks is a great challenge. Therefore, building a compact version of networks allowing their analyses is a must. Backbone extraction techniques are among the leading solutions to reduce network size while preserving its features. Coarse-graining merges similar nodes to reduce the network size, while filter-based methods remove nodes or edges according to a specific statistical property. Since community structure is ubiquitous in real-world networks, preserving it in the backbone extraction process is of prime interest. To this end, we propose a filter-based method. The so-called "modularity vitality backbone" removes nodes with the lower contribution to the network's modularity. Experimental results show that the proposed strategy outperforms the "overlapping nodes ego backbone" and the "overlapping nodes and hub backbone." These two backbone extraction processes recently introduced have proved their efficacy to preserve better the information of the original network than the popular disparity filter. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 277,828 |
1807.02504 | From Rank Estimation to Rank Approximation: Rank Residual Constraint for
Image Restoration | In this paper, we propose a novel approach to the rank minimization problem, termed rank residual constraint (RRC) model. Different from existing low-rank based approaches, such as the well-known nuclear norm minimization (NNM) and the weighted nuclear norm minimization (WNNM), which estimate the underlying low-rank matrix directly from the corrupted observations, we progressively approximate the underlying low-rank matrix via minimizing the rank residual. Through integrating the image nonlocal self-similarity (NSS) prior with the proposed RRC model, we apply it to image restoration tasks, including image denoising and image compression artifacts reduction. Towards this end, we first obtain a good reference of the original image groups by using the image NSS prior, and then the rank residual of the image groups between this reference and the degraded image is minimized to achieve a better estimate to the desired image. In this manner, both the reference and the estimated image are updated gradually and jointly in each iteration. Based on the group-based sparse representation model, we further provide a theoretical analysis on the feasibility of the proposed RRC model. Experimental results demonstrate that the proposed RRC model outperforms many state-of-the-art schemes in both the objective and perceptual quality. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 102,282 |
2304.05667 | Rail Detection: An Efficient Row-based Network and A New Benchmark | Rail detection, essential for railroad anomaly detection, aims to identify the railroad region in video frames. Although various studies on rail detection exist, neither an open benchmark nor a high-speed network is available in the community, making algorithm comparison and development difficult. Inspired by the growth of lane detection, we propose a rail database and a row-based rail detection method. In detail, we make several contributions: (i) We present a real-world railway dataset, Rail-DB, with 7432 pairs of images and annotations. The images are collected from different situations in lighting, road structures, and views. The rails are labeled with polylines, and the images are categorized into nine scenes. The Rail-DB is expected to facilitate the improvement of rail detection algorithms. (ii) We present an efficient row-based rail detection method, Rail-Net, containing a lightweight convolutional backbone and an anchor classifier. Specifically, we formulate the process of rail detection as a row-based selecting problem. This strategy reduces the computational cost compared to alternative segmentation methods. (iii) We evaluate the Rail-Net on Rail-DB with extensive experiments, including cross-scene settings and network backbones ranging from ResNet to Vision Transformers. Our method achieves promising performance in terms of both speed and accuracy. Notably, a lightweight version could achieve 92.77% accuracy and 312 frames per second. The Rail-Net outperforms the traditional method by 50.65% and the segmentation one by 5.86%. The database and code are available at: https://github.com/Sampson-Lee/Rail-Detection. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 357,709 |
2105.12225 | A risk analysis framework for real-time control systems | We present a Monte Carlo simulation framework for analysing the risk involved in deploying real-time control systems in safety-critical applications, as well as an algorithm design technique allowing one (in certain situations) to robustify a control algorithm. Both approaches are very general and agnostic to the initial control algorithm. We present examples showing that these techniques can be used to analyse the reliability of implementations of non-linear model predictive control algorithms. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 236,939 |
2410.17619 | From PDFs to Structured Data: Utilizing LLM Analysis in Sports Database
Management | This study investigates the effectiveness of Large Language Models (LLMs) in processing semi-structured data from PDF documents into structured formats, specifically examining their application in updating the Finnish Sports Clubs Database. Through action research methodology, we developed and evaluated an AI-assisted approach utilizing OpenAI's GPT-4 and Anthropic's Claude 3 Opus models to process data from 72 sports federation membership reports. The system achieved a 90% success rate in automated processing, successfully handling 65 of 72 files without errors and converting over 7,900 rows of data. While the initial development time was comparable to traditional manual processing (three months), the implemented system shows potential for reducing future processing time by approximately 90%. Key challenges included handling multilingual content, processing multi-page datasets, and managing extraneous information. The findings suggest that while LLMs demonstrate significant potential for automating semi-structured data processing tasks, optimal results are achieved through a hybrid approach combining AI automation with selective human oversight. This research contributes to the growing body of literature on practical LLM applications in organizational data management and provides insights into the transformation of traditional data processing workflows. | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 501,549 |
2002.07392 | Bit Error Rate Analysis of M-ARY PSK and M-ARY QAM Over Rician Fading
Channel | This paper mainly illustrates the bit error rate performance of M-ary QAM and M-ary PSK for different values of SNR over a Rician fading channel. A signal experiences multipath propagation in the wireless communication system, which causes expeditious signal amplitude fluctuations in time; this is defined as fading. Rician fading is a small-signal fading. Rician fading is a hypothetical model for radio propagation inconsistency produced by fractional cancellation of a radio signal by itself, and as a result the signal reaches the receiver by several different paths. In this case, at least one of the destination paths is being lengthened or shortened. From this paper, it can be observed that the value of bit error rate decreases when signal-to-noise ratio increases in decibels for M-ary QAM and M-ary PSK such as 256 QAM, 64 PSK etc. Constellation diagrams of M-QAM and M-PSK have also been shown in this paper using MATLAB simulation. The falling of bit error rate with the increase of diversity order for a fixed value of SNR has also been included in this paper. Diversity is an influential receiver technique which offers improvement in received signal strength. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 164,460 |
1407.1031 | The Shortest Path to Happiness: Recommending Beautiful, Quiet, and Happy
Routes in the City | When providing directions to a place, web and mobile mapping services are all able to suggest the shortest route. The goal of this work is to automatically suggest routes that are not only short but also emotionally pleasant. To quantify the extent to which urban locations are pleasant, we use data from a crowd-sourcing platform that shows two street scenes in London (out of hundreds), and a user votes on which one looks more beautiful, quiet, and happy. We consider votes from more than 3.3K individuals and translate them into quantitative measures of location perceptions. We arrange those locations into a graph upon which we learn pleasant routes. Based on a quantitative validation, we find that, compared to the shortest routes, the recommended ones add just a few extra walking minutes and are indeed perceived to be more beautiful, quiet, and happy. To test the generality of our approach, we consider Flickr metadata of more than 3.7M pictures in London and 1.3M in Boston, compute proxies for the crowdsourced beauty dimension (the one for which we have collected the most votes), and evaluate those proxies with 30 participants in London and 54 in Boston. These participants have not only rated our recommendations but have also carefully motivated their choices, providing insights for future work. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 34,386 |
2101.02703 | Distribution-Free, Risk-Controlling Prediction Sets | While improving prediction accuracy has been the focus of machine learning in recent years, this alone does not suffice for reliable decision-making. Deploying learning systems in consequential settings also requires calibrating and communicating the uncertainty of predictions. To convey instance-wise uncertainty for prediction tasks, we show how to generate set-valued predictions from a black-box predictor that control the expected loss on future test points at a user-specified level. Our approach provides explicit finite-sample guarantees for any dataset by using a holdout set to calibrate the size of the prediction sets. This framework enables simple, distribution-free, rigorous error control for many tasks, and we demonstrate it in five large-scale machine learning problems: (1) classification problems where some mistakes are more costly than others; (2) multi-label classification, where each observation has multiple associated labels; (3) classification problems where the labels have a hierarchical structure; (4) image segmentation, where we wish to predict a set of pixels containing an object of interest; and (5) protein structure prediction. Lastly, we discuss extensions to uncertainty quantification for ranking, metric learning and distributionally robust learning. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 214,714 |
1205.2618 | BPR: Bayesian Personalized Ranking from Implicit Feedback | Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive k-nearest-neighbor (kNN). Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 15,925 |
2203.10974 | Towards Self-Supervised Gaze Estimation | Recent joint embedding-based self-supervised methods have surpassed standard supervised approaches on various image recognition tasks such as image classification. These self-supervised methods aim at maximizing agreement between features extracted from two differently transformed views of the same image, which results in learning an invariant representation with respect to appearance and geometric image transformations. However, the effectiveness of these approaches remains unclear in the context of gaze estimation, a structured regression task that requires equivariance under geometric transformations (e.g., rotations, horizontal flip). In this work, we propose SwAT, an equivariant version of the online clustering-based self-supervised approach SwAV, to learn more informative representations for gaze estimation. We demonstrate that SwAT, with ResNet-50 and supported with uncurated unlabeled face images, outperforms state-of-the-art gaze estimation methods and supervised baselines in various experiments. In particular, we achieve up to 57% and 25% improvements in cross-dataset and within-dataset evaluation tasks on existing benchmarks (ETH-XGaze, Gaze360, and MPIIFaceGaze). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 286,751 |
1701.01214 | A Review of Neural Network Based Machine Learning Approaches for Rotor
Angle Stability Control | This paper reviews the current status and challenges of Neural Network (NN)-based machine learning approaches for modern power grid stability control, including their design and implementation methodologies. NNs are widely accepted Artificial Intelligence (AI) approaches that offer an alternative way to control complex and ill-defined problems. This paper discusses various applications of NNs to the power system rotor angle stabilization and control problem, with a main focus on the use of Reinforcement Learning (RL) and Supervised Learning (SL) algorithms in power system wide-area control (WAC). Owing to their capability to model nonlinearities and uncertainties, these algorithms are used for transient classification, neuro-control, wide-area monitoring and control, renewable energy management and control, and so on. The works of researchers in the field of conventional and renewable energy systems are reported and categorized. The paper concludes by presenting, comparing, and evaluating various learning techniques and infrastructure configurations based on efficiency. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | true | false | false | 66,371 |
2112.02896 | Tunable Image Quality Control of 3-D Ultrasound using Switchable
CycleGAN | In contrast to 2-D ultrasound (US) for uniaxial plane imaging, a 3-D US imaging system can visualize a volume along three axial planes. This allows for a full view of the anatomy, which is useful for gynecological (GYN) and obstetrical (OB) applications. Unfortunately, the 3-D US has an inherent limitation in resolution compared to the 2-D US. In the case of 3-D US with a 3-D mechanical probe, for example, the image quality is comparable along the beam direction, but significant deterioration in image quality is often observed in the other two axial image planes. To address this, here we propose a novel unsupervised deep learning approach to improve 3-D US image quality. In particular, using {\em unmatched} high-quality 2-D US images as a reference, we trained a recently proposed switchable CycleGAN architecture so that every mapping plane in 3-D US can learn the image quality of 2-D US images. Thanks to the switchable architecture, our network can also provide real-time control of image enhancement level based on user preference, which is ideal for a user-centric scanner setup. Extensive experiments with clinical evaluation confirm that our method offers significantly improved image quality as well as user-friendly flexibility. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 270,010 |
2109.00906 | An Automated Approach for the Recognition of Bengali License Plates | Automatic License Plate Recognition (ALPR) is a system for automatically identifying the license plates of any vehicle. This process is important for tracking, ticketing, and billing systems, among other things. With the use of information and communication technology (ICT), all systems are being automated, including vehicle tracking systems. This study proposes a hybrid method for detecting license plates and recognizing the characters on them, applied to captured images of Bangladeshi vehicles. Here, the YOLO model was used for license plate detection, correctly predicting 81% of plates. Otsu's thresholding was then used for license plate segmentation, and finally a CNN model was applied for character recognition. This model will allow an automated vehicle license plate detection system to avoid any misuse. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 253,271 |
2204.03030 | Statistical Model Criticism of Variational Auto-Encoders | We propose a framework for the statistical evaluation of variational auto-encoders (VAEs) and test two instances of this framework in the context of modelling images of handwritten digits and a corpus of English text. Our take on evaluation is based on the idea of statistical model criticism, popular in Bayesian data analysis, whereby a statistical model is evaluated in terms of its ability to reproduce statistics of an unknown data generating process from which we can obtain samples. A VAE learns not one, but two joint distributions over a shared sample space, each exploiting a choice of factorisation that makes sampling tractable in one of two directions (latent-to-data, data-to-latent). We evaluate samples from these distributions, assessing their (marginal) fit to the observed data and our choice of prior, and we also evaluate samples through a pipeline that connects the two distributions starting from a data sample, assessing whether together they exploit and reveal latent factors of variation that are useful to a practitioner. We show that this methodology offers possibilities for model selection qualitatively beyond intrinsic evaluation metrics and at a finer granularity than commonly used statistics can offer. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 290,158 |
2112.05451 | Structure-Preserving Learning Using Gaussian Processes and Variational
Integrators | Gaussian process regression is increasingly applied for learning unknown dynamical systems. In particular, the implicit quantification of the uncertainty of the learned model makes it a promising approach for safety-critical applications. When using Gaussian process regression to learn unknown systems, a commonly considered approach consists of learning the residual dynamics after applying some generic discretization technique, which might however disregard properties of the underlying physical system. Variational integrators are a less common yet promising approach to discretization, as they retain physical properties of the underlying system, such as energy conservation and satisfaction of explicit kinematic constraints. In this work, we present a novel structure-preserving learning-based modelling approach that combines a variational integrator for the nominal dynamics of a mechanical system and learning residual dynamics with Gaussian process regression. We extend our approach to systems with known kinematic constraints and provide formal bounds on the prediction uncertainty. The simulative evaluation of the proposed method shows desirable energy conservation properties in accordance with general theoretical results and demonstrates exact constraint satisfaction for constrained dynamical systems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 270,847 |
2307.05853 | GLA-GCN: Global-local Adaptive Graph Convolutional Network for 3D Human
Pose Estimation from Monocular Video | 3D human pose estimation has been researched for decades with promising results. 3D human pose lifting is one of the promising research directions toward the task where both estimated pose and ground truth pose data are used for training. Existing pose lifting works mainly focus on improving the performance of estimated pose, but they usually underperform when tested on the ground truth pose data. We observe that the performance of the estimated pose can be easily improved by preparing good quality 2D pose, such as fine-tuning the 2D pose or using advanced 2D pose detectors. As such, we concentrate on improving 3D human pose lifting via ground truth data, anticipating future improvements in the quality of estimated pose data. Towards this goal, a simple yet effective model called Global-local Adaptive Graph Convolutional Network (GLA-GCN) is proposed in this work. Our GLA-GCN globally models the spatiotemporal structure via a graph representation and backtraces local joint features for 3D human pose estimation via individually connected layers. To validate our model design, we conduct extensive experiments on three benchmark datasets: Human3.6M, HumanEva-I, and MPI-INF-3DHP. Experimental results show that our GLA-GCN implemented with ground truth 2D poses significantly outperforms state-of-the-art methods (e.g., up to around 3%, 17%, and 14% error reductions on Human3.6M, HumanEva-I, and MPI-INF-3DHP, respectively). GitHub: https://github.com/bruceyo/GLA-GCN. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 378,874 |
2010.11682 | Lung Nodule Classification Using Biomarkers, Volumetric Radiomics and 3D
CNNs | We present a hybrid algorithm to estimate lung nodule malignancy that combines imaging biomarkers from radiologists' annotations with image classification of CT scans. Our algorithm employs a 3D Convolutional Neural Network (CNN) as well as a Random Forest in order to combine CT imagery with biomarker annotation and volumetric radiomic features. We analyze and compare the performance of the algorithm using only imagery, only biomarkers, combined imagery + biomarkers, combined imagery + volumetric radiomic features, and finally the combination of imagery + biomarkers + volumetric features in order to classify the suspicion level of nodule malignancy. The National Cancer Institute (NCI) Lung Image Database Consortium (LIDC) IDRI dataset is used to train and evaluate the classification task. We show that the incorporation of semi-supervised learning by means of K-Nearest-Neighbors (KNN) can increase the available training sample size of the LIDC-IDRI, thereby further improving the accuracy of malignancy estimation for most of the models tested, although there is no significant improvement with the use of KNN semi-supervised learning when image classification with CNNs and volumetric features is combined with descriptive biomarkers. Unexpectedly, we also show that a model using image biomarkers alone is more accurate than one that combines biomarkers with volumetric radiomics, 3D CNNs, and semi-supervised learning. We discuss the possibility that this result may be influenced by cognitive bias in LIDC-IDRI because malignancy estimates were recorded by the same radiologist panel as biomarkers, as well as future work to incorporate pathology information over a subset of study participants. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 202,366 |
2106.02929 | The topological Dirac equation of networks and simplicial complexes | We define the topological Dirac equation describing the evolution of a topological wave function on networks or on simplicial complexes. On networks, the topological wave function describes the dynamics of topological signals or cochains, i.e. dynamical signals defined both on nodes and on links. On simplicial complexes the wave function is also defined on higher-dimensional simplices. Therefore the topological wave function satisfies a relaxed condition of locality as it acquires the same value along simplices of dimension larger than zero. The topological Dirac equation defines eigenstates whose dispersion relation is determined by the spectral properties of the Dirac (or chiral) operator defined on networks and generalized network structures including simplicial complexes and multiplex networks. On simplicial complexes the Dirac equation leads to multiple energy bands. On multiplex networks the topological Dirac equation can be generalized to distinguish between different multilinks leading to a natural definition of rotations of the topological spinor. The topological Dirac equation is here initially formulated on a spatial network or simplicial complex for describing the evolution of the topological wave function in continuous time. This framework is also extended to treat the topological Dirac equation on $1+d$ spaces describing a discrete space-time with one temporal dimension and $d$ spatial dimensions with $d\in \{1,2,3\}$. This work also includes the discussion of numerical results obtained by implementing the topological Dirac equation on simplicial complex models and on real simple and multiplex network data. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 239,093 |
2002.01379 | Combining 3D Model Contour Energy and Keypoints for Object Tracking | We present a new combined approach for monocular model-based 3D tracking. A preliminary object pose is estimated by using a keypoint-based technique. The pose is then refined by optimizing the contour energy function. The energy determines the degree of correspondence between the contour of the model projection and the image edges. It is calculated based on both the intensity and orientation of the raw image gradient. For optimization, we propose a technique and search area constraints that allow overcoming the local optima and taking into account information obtained through keypoint-based pose estimation. Owing to its combined nature, our method eliminates numerous issues of keypoint-based and edge-based approaches. We demonstrate the efficiency of our method by comparing it with state-of-the-art methods on a public benchmark dataset that includes videos with various lighting conditions, movement patterns, and speed. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 162,641 |
2209.07527 | Improved proteasomal cleavage prediction with positive-unlabeled
learning | Accurate in silico modeling of the antigen processing pathway is crucial to enable personalized epitope vaccine design for cancer. An important step of this pathway is the degradation of the vaccine into smaller peptides by the proteasome, some of which are going to be presented to T cells by the MHC complex. While predicting MHC-peptide presentation has received a lot of attention recently, proteasomal cleavage prediction remains a relatively unexplored area in light of recent advances in high-throughput mass spectrometry-based MHC ligandomics. Moreover, as such experimental techniques do not allow the identification of regions that cannot be cleaved, the latest predictors generate decoy negative samples and treat them as true negatives when training, even though some of them could actually be positives. In this work, we thus present a new predictor trained with an expanded dataset and the solid theoretical underpinning of positive-unlabeled learning, achieving a new state-of-the-art in proteasomal cleavage prediction. The improved predictive capabilities will in turn enable more precise vaccine development, improving the efficacy of epitope-based vaccines. Pretrained models are available on GitHub. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 317,779 |
1712.04254 | Robustness, Evolvability and Phenotypic Complexity: Insights from
Evolving Digital Circuits | We show how the characteristics of the evolutionary algorithm influence the evolvability of candidate solutions, i.e. the propensity of evolving individuals to generate better solutions as a result of genetic variation. More specifically, (1+{\lambda}) evolutionary strategies largely outperform ({\mu}+1) evolutionary strategies in the context of the evolution of digital circuits --- a domain characterized by a high level of neutrality. This difference is due to the fact that the competition for robustness to mutations among the circuits evolved with ({\mu}+1) evolutionary strategies leads to the selection of phenotypically simple but low evolvable circuits. These circuits achieve robustness by minimizing the number of functional genes rather than by relying on redundancy or degeneracy to buffer the effects of mutations. The analysis of these factors enabled us to design a new evolutionary algorithm, named Parallel Stochastic Hill Climber (PSHC), which outperforms the other two methods considered. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 86,576 |
2203.08765 | Efficient conditioned face animation using frontally-viewed embedding | As the quality of few-shot facial animation from landmarks increases, new applications become possible, such as ultra-low-bandwidth video chat compression with a high degree of realism. However, there are some important challenges to tackle in order to improve the experience in real-world conditions. In particular, current approaches fail to represent profile views without distortions while running in a low-compute regime. We focus on this key problem by introducing a multi-frame embedding dubbed Frontalizer to improve profile view rendering. In addition to this core improvement, we explore the learning of a latent code conditioning generations along with landmarks to better convey facial expressions. Our dense models achieve a 22% improvement in perceptual quality and a 73% reduction of landmark error over the first order model baseline on a subset of DFDC videos containing head movements. Adapted to mobile architectures, our models outperform the previous state-of-the-art (improving perceptual quality by more than 16% and reducing landmark error by more than 47% on two datasets) while running in real time on iPhone 8 with very low bandwidth requirements. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 285,904 |
2308.00890 | Tango: rethinking quantization for graph neural network training on GPUs | Graph Neural Networks (GNNs) are becoming increasingly popular due to their superior performance in critical graph-related tasks. While quantization is widely used to accelerate GNN computation, quantized training faces unprecedented challenges. Current quantized GNN training systems often have longer training times than their full-precision counterparts for two reasons: (i) addressing the accuracy challenge leads to excessive overhead, and (ii) the optimization potential exposed by quantization is not adequately leveraged. This paper introduces Tango which re-thinks quantization challenges and opportunities for graph neural network training on GPUs with three contributions: Firstly, we introduce efficient rules to maintain accuracy during quantized GNN training. Secondly, we design and implement quantization-aware primitives and inter-primitive optimizations that can speed up GNN training. Finally, we integrate Tango with the popular Deep Graph Library (DGL) system and demonstrate its superior performance over state-of-the-art approaches on various GNN models and datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 383,066 |
2304.13957 | Decentralized Inference via Capability Type Structures in Cooperative
Multi-Agent Systems | This work studies the problem of ad hoc teamwork in teams composed of agents with differing computational capabilities. We consider cooperative multi-player games in which each agent's policy is constrained by a private capability parameter, and agents with higher capabilities are able to simulate the behavior of agents with lower capabilities (but not vice-versa). To address this challenge, we propose an algorithm that maintains a belief over the other agents' capabilities and incorporates this belief into the planning process. Our primary innovation is a novel framework based on capability type structures, which ensures that the belief updates remain consistent and informative without constructing the infinite hierarchy of beliefs. We also extend our techniques to settings where the agents' observations are subject to noise. We provide examples of games in which deviations in capability between oblivious agents can lead to arbitrarily poor outcomes, and experimentally validate that our capability-aware algorithm avoids the anti-cooperative behavior of the naive approach in these toy settings as well as a more complex cooperative checkers environment. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | 360,771 |
1104.4426 | Phylogeny and geometry of languages from normalized Levenshtein distance | The idea that the distance among pairs of languages can be evaluated from lexical differences seems to have its roots in the work of the French explorer Dumont D'Urville. He collected comparative words lists of various languages during his voyages aboard the Astrolabe from 1826 to 1829 and, in his work about the geographical division of the Pacific, he proposed a method to measure the degree of relation between languages. The method used by the modern lexicostatistics, developed by Morris Swadesh in the 1950s, measures distances from the percentage of shared cognates, which are words with a common historical origin. The weak point of this method is that subjective judgment plays a relevant role. Recently, we have proposed a new automated method which is motivated by the analogy with genetics. The new approach avoids any subjectivity and results can be easily replicated by other scholars. The distance between two languages is defined by considering a renormalized Levenshtein distance between pair of words with the same meaning and averaging on the words contained in a list. The renormalization, which takes into account the length of the words, plays a crucial role, and no sensible results can be found without it. In this paper we give a short review of our automated method and we illustrate it by considering the cluster of Malagasy dialects. We show that it sheds new light on their kinship relation and also that it furnishes a lot of new information concerning the modalities of the settlement of Madagascar. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 10,089 |
1508.01192 | Mining for Causal Relationships: A Data-Driven Study of the Islamic
State | The Islamic State of Iraq and al-Sham (ISIS) is a dominant insurgent group operating in Iraq and Syria that rose to prominence when it took over Mosul in June, 2014. In this paper, we present a data-driven approach to analyzing this group using a dataset consisting of 2200 incidents of military activity surrounding ISIS and the forces that oppose it (including Iraqi, Syrian, and the American-led coalition). We combine ideas from logic programming and causal reasoning to mine for association rules for which we present evidence of causality. We present relationships that link ISIS vehicle-borne improvised explosive device (VBIED) activity in Syria with military operations in Iraq, coalition air strikes, and ISIS IED activity, as well as rules that may serve as indicators of spikes in indirect fire, suicide attacks, and arrests. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 45,766 |
2209.12177 | Application of Deep Learning in Generating Structured Radiology Reports:
A Transformer-Based Technique | Since radiology reports needed for clinical practice and research are written and stored as free-text narratives, extraction of relevant information for further analysis is difficult. In these circumstances, natural language processing (NLP) techniques can facilitate automatic information extraction and transformation of free-text formats to structured data. In recent years, deep learning (DL)-based models have been adapted for NLP experiments with promising results. Despite the significant potential of DL models based on artificial neural networks (ANN) and convolutional neural networks (CNN), these models face some limitations to implementation in clinical practice. Transformers, another new DL architecture, have been increasingly applied to improve the process. Therefore, in this study, we propose a transformer-based fine-grained named entity recognition (NER) architecture for clinical information extraction. We collected 88 abdominopelvic sonography reports in free-text formats and annotated them based on our developed information schema. The text-to-text transfer transformer model (T5) and Scifive, a pre-trained domain-specific adaptation of the T5 model, were applied for fine-tuning to extract entities and relations and transform the input into a structured format. Our transformer-based model in this study outperformed previously applied approaches such as ANN and CNN models based on ROUGE-1, ROUGE-2, ROUGE-L, and BLEU scores of 0.816, 0.668, 0.528, and 0.743, respectively, while providing an interpretable structured report. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 319,445 |
2411.05557 | A Nerf-Based Color Consistency Method for Remote Sensing Images | Due to different seasons, illumination, and atmospheric conditions, the photometry of the acquired image varies greatly, which leads to obvious stitching seams at the edges of the mosaic image. Traditional methods can be divided into two categories: one is absolute radiation correction and the other is relative radiation normalization. We propose a NeRF-based method of color consistency correction for multi-view images, which weaves image features together using implicit expressions, and then re-illuminates the feature space to generate a fusion image with a new perspective. We chose Superview-1 satellite images and UAV images with large range and time differences for the experiment. Experimental results show that the synthesized image generated by our method has an excellent visual effect and smooth color transitions at the edges. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 506,709 |
2306.16133 | Training Deep Surrogate Models with Large Scale Online Learning | The spatiotemporal resolution of Partial Differential Equations (PDEs) plays an important role in the mathematical description of the world's physical phenomena. In general, scientists and engineers solve PDEs numerically by the use of computationally demanding solvers. Recently, deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs. Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training. This paper advocates that relying on a traditional static dataset to train these models does not allow the full benefit of the solver to be used as a data generator. It proposes an open source online training framework for deep surrogate models. The framework implements several levels of parallelism focused on simultaneously generating numerical simulations and training deep neural networks. This approach suppresses the I/O and storage bottleneck associated with disk-loaded datasets, and opens the way to training on significantly larger datasets. Experiments compare the offline and online training of four surrogate models, including state-of-the-art architectures. Results indicate that exposing deep surrogate models to more dataset diversity, up to hundreds of GB, can increase model generalization capabilities. Fully connected neural networks, Fourier Neural Operator (FNO), and Message Passing PDE Solver prediction accuracy is improved by 68%, 16% and 7%, respectively. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 376,292
2408.12634 | Joint Hypergraph Rewiring and Memory-Augmented Forecasting Techniques in
Digital Twin Technology | Digital Twin technology creates virtual replicas of physical objects, processes, or systems by replicating their properties, data, and behaviors. This advanced technology offers a range of intelligent functionalities, such as modeling, simulation, and data-driven decision-making, that facilitate design optimization, performance estimation, and monitoring operations. Forecasting plays a pivotal role in Digital Twin technology, as it enables the prediction of future outcomes, supports informed decision-making, minimizes risks, and drives improvements in efficiency, productivity, and cost reduction. Recently, Digital Twin technology has leveraged Graph forecasting techniques in large-scale complex sensor networks to enable accurate forecasting and simulation of diverse scenarios, fostering proactive and data-driven decision-making. However, existing Graph forecasting techniques lack scalability for many real-world applications. They have a limited ability to adapt to non-stationary environments and retain past knowledge, and they lack mechanisms to capture higher-order spatio-temporal dynamics and to estimate uncertainty in model predictions. To surmount these challenges, we introduce a hybrid architecture that enhances the hypergraph representation learning backbone by incorporating fast adaptation to new patterns and memory-based retrieval of past knowledge. This balance aims to improve the slowly-learned backbone and achieve better performance in adapting to recent changes. In addition, it models the time-varying uncertainty of multi-horizon forecasts, providing estimates of prediction uncertainty. Our forecasting architecture has been validated through ablation studies and has demonstrated promising results across multiple benchmark datasets, surpassing state-of-the-art forecasting methods by a significant margin. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 482,821
2309.02320 | SeisCLIP: A seismology foundation model pre-trained by multi-modal data
for multi-purpose seismic feature extraction | Training specific deep learning models for particular tasks is common across various domains within seismology. However, this approach encounters two limitations: inadequate labeled data for certain tasks and limited generalization across regions. To address these challenges, we develop SeisCLIP, a seismology foundation model trained through contrastive learning from multi-modal data. It consists of a transformer encoder for extracting crucial features from time-frequency seismic spectrum and an MLP encoder for integrating the phase and source information of the same event. These encoders are jointly pre-trained on a vast dataset and the spectrum encoder is subsequently fine-tuned on smaller datasets for various downstream tasks. Notably, SeisCLIP's performance surpasses that of baseline methods in event classification, localization, and focal mechanism analysis tasks, employing distinct datasets from different regions. In conclusion, SeisCLIP holds significant potential as a foundational model in the field of seismology, paving the way for innovative directions in foundation-model-based seismology research. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 390,006 |
2312.05281 | X2-Softmax: Margin Adaptive Loss Function for Face Recognition | Learning the discriminative features of different faces is an important task in face recognition. By extracting face features in neural networks, it becomes easy to measure the similarity of different face images, which makes face recognition possible. To enhance the neural network's face feature separability, incorporating an angular margin during training is common practice. State-of-the-art loss functions CosFace and ArcFace apply fixed margins between weights of classes to enhance the inter-class separation of face features. Since the distribution of samples in the training set is imbalanced, similarities between different identities are unequal. Therefore, using an inappropriately fixed angular margin may make the model difficult to converge or leave the face features insufficiently discriminative. It is more intuitive for the margins to be angularly adaptive, increasing as the angles between classes grow. In this paper, we propose a new angular margin loss named X2-Softmax. X2-Softmax loss has adaptive angular margins, which increase as the angle between different classes grows. The angular adaptive margin ensures model flexibility and effectively improves face recognition performance. We have trained the neural network with X2-Softmax loss on the MS1Mv3 dataset and tested it on several evaluation benchmarks to demonstrate the effectiveness and superiority of our loss function. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 414,019
1903.00820 | Robot-to-Robot Relative Pose Estimation using Humans as Markers | In this paper, we propose a method to determine the 3D relative pose of pairs of communicating robots by using human pose-based key-points as correspondences. We adopt a 'leader-follower' framework, where at first, the leader robot visually detects and triangulates the key-points using the state-of-the-art pose detector named OpenPose. Afterward, the follower robots match the corresponding 2D projections on their respective calibrated cameras and find their relative poses by solving the perspective-n-point (PnP) problem. In the proposed method, we design an efficient person re-identification technique for associating the mutually visible humans in the scene. Additionally, we present an iterative optimization algorithm to refine the associated key-points based on their local structural properties in the image space. We demonstrate that these refinement processes are essential to establish accurate key-point correspondences across viewpoints. Furthermore, we evaluate the performance of the proposed relative pose estimation system through several experiments conducted in terrestrial and underwater environments. Finally, we discuss the relevant operational challenges of this approach and analyze its feasibility for multi-robot cooperative systems in human-dominated social settings and feature-deprived environments such as underwater. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 123,109 |
1508.00140 | Resilient Backhaul Network Design Using Hybrid Radio/Free-Space Optical
Technology | Radio-frequency (RF) technology is a scalable solution for backhaul planning. However, its performance is limited in terms of data rate and latency. Free Space Optical (FSO) backhaul, on the other hand, offers a higher data rate but is sensitive to weather conditions. To combine the advantages of RF and FSO backhauls, this paper proposes a cost-efficient backhaul network using the hybrid RF/FSO technology. To ensure a resilient backhaul, the paper imposes a given degree of redundancy by connecting each node through $K$ link-disjoint paths so as to cope with potential link failures. Hence, the network planning problem considered in this paper is that of minimizing the total deployment cost by choosing the appropriate link type, i.e., either hybrid RF/FSO or optical fiber (OF), between each pair of base stations while guaranteeing $K$ link-disjoint connections, a data rate target, and a reliability threshold. The paper solves the problem using graph theory techniques. It reformulates the problem as a maximum weight clique problem in the planning graph, under a specified realistic assumption about the cost of OF and hybrid RF/FSO links. Simulation results show the costs of the different planning options and suggest that the proposed heuristic solution has a close-to-optimal performance for a significant gain in computation complexity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 45,636
2501.18056 | RL-based Query Rewriting with Distilled LLM for online E-Commerce
Systems | Query rewriting (QR) is a critical technique in e-commerce search, addressing the lexical gap between user queries and product descriptions to enhance search performance. Existing QR approaches typically fall into two categories: discriminative models and generative methods leveraging large language models (LLMs). Discriminative models often struggle with natural language understanding and offer limited flexibility in rewriting, while generative LLMs, despite producing high-quality rewrites, face high inference latency and cost in online settings. These limitations force offline deployment, making them vulnerable to issues like information staleness and semantic drift. To overcome these challenges, we propose a novel hybrid pipeline for QR that balances efficiency and effectiveness. Our approach combines offline knowledge distillation to create a lightweight but efficient student model with online reinforcement learning (RL) to refine query rewriting dynamically using real-time feedback. A key innovation is the use of LLMs as simulated human feedback, enabling scalable reward signals and cost-effective evaluation without manual annotations. Experimental results on Amazon ESCI dataset demonstrate significant improvements in query relevance, diversity, and adaptability, as well as positive feedback from the LLM simulation. This work contributes to advancing LLM capabilities for domain-specific applications, offering a robust solution for dynamic and complex e-commerce search environments. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 528,547 |
1709.01231 | Discriminative Similarity for Clustering and Semi-Supervised Learning | Similarity-based clustering and semi-supervised learning methods separate the data into clusters or classes according to the pairwise similarity between the data, and the pairwise similarity is crucial for their performance. In this paper, we propose a novel discriminative similarity learning framework which learns discriminative similarity for either data clustering or semi-supervised learning. The proposed framework learns a classifier from each hypothetical labeling, and searches for the optimal labeling by minimizing the generalization error of the learned classifiers associated with the hypothetical labeling. A kernel classifier is employed in our framework. By generalization analysis via Rademacher complexity, the generalization error bound for the kernel classifier learned from hypothetical labeling is expressed as the sum of pairwise similarity between the data from different classes, parameterized by the weights of the kernel classifier. Such pairwise similarity serves as the discriminative similarity for the purpose of clustering and semi-supervised learning, and discriminative similarity with a similar form can also be induced by the integrated squared error bound for kernel density classification. Based on the discriminative similarity induced by the kernel classifier, we propose new clustering and semi-supervised learning methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 80,047
2403.05416 | SIRST-5K: Exploring Massive Negatives Synthesis with Self-supervised
Learning for Robust Infrared Small Target Detection | Single-frame infrared small target (SIRST) detection aims to recognize small targets from clutter backgrounds. Recently, convolutional neural networks have achieved significant advantages in general object detection. With the development of Transformer, the scale of SIRST models is constantly increasing. Due to the limited training samples, performance has not been improved accordingly. The quality, quantity, and diversity of the infrared dataset are critical to the detection of small targets. To address this issue, we propose a negative sample augmentation method in this paper. Specifically, a negative augmentation approach is proposed to generate massive negatives for self-supervised learning. Firstly, we apply a sequential noise modeling technique to generate realistic infrared data. Secondly, we fuse the extracted noise with the original data to facilitate diversity and fidelity in the generated data. Lastly, we propose a negative augmentation strategy to enrich diversity as well as maintain semantic invariance. The proposed algorithm produces a synthetic SIRST-5K dataset, which contains massive pseudo-data and corresponding labels. With a rich diversity of infrared small target data, our algorithm significantly improves the model performance and convergence speed. Compared with other state-of-the-art (SOTA) methods, our method achieves outstanding performance in terms of probability of detection (Pd), false-alarm rate (Fa), and intersection over union (IoU). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 436,000
2111.12218 | Flexible Pattern Discovery and Analysis | High utility-occupancy pattern mining (HUOPM), which analyzes the proportion of utility in the supporting transactions, has recently attracted widespread attention in the field of data mining. Unlike high-utility pattern mining (HUPM), which involves the enumeration of high-utility (e.g., profitable) patterns, HUOPM aims to find patterns representing a collection of existing transactions. In practical applications, however, not all patterns are used or valuable. For example, a pattern might contain too many items; that is, the pattern might be too specific and therefore lack value for users in real life. To achieve qualified patterns with a flexible length, we constrain the minimum and maximum lengths during the mining process and introduce a novel algorithm for the mining of flexible high utility-occupancy patterns. Our algorithm is referred to as HUOPM+. To ensure the flexibility of the patterns and tighten the upper bound of the utility-occupancy, a strategy called the length upper-bound (LUB) is presented to prune the search space. In addition, a utility-occupancy nested list (UO-nlist) and a frequency-utility-occupancy table (FUO-table) are employed to avoid multiple scans of the database. Evaluation results of the subsequent experiments confirm that the proposed algorithm can effectively control the length of the derived patterns, for both real-world and synthetic datasets. Moreover, it can decrease the execution time and memory consumption. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | false | 267,902
1907.10839 | Hard-Aware Fashion Attribute Classification | Fashion attribute classification is of great importance to many high-level tasks such as fashion item search, fashion trend analysis, fashion recommendation, etc. The task is challenging due to the extremely imbalanced data distribution, particularly the attributes with only a few positive samples. In this paper, we introduce a hard-aware pipeline to make full use of "hard" samples/attributes. We first propose Hard-Aware BackPropagation (HABP) to efficiently and adaptively focus on training "hard" data. Then for the identified hard labels, we propose to synthesize more complementary samples for training. To stabilize training, we extend semi-supervised GAN by directly deactivating outputs for synthetic complementary samples (Deact). In general, our method is more effective in addressing "hard" cases. HABP weights more on "hard" samples. For "hard" attributes with insufficient training data, Deact brings more stable synthetic samples for training and further improve the performance. Our method is verified on large scale fashion dataset, outperforming other state-of-the-art without any additional supervisions. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 139,717 |
1910.11818 | Real-time Memory Efficient Large-pose Face Alignment via Deep
Evolutionary Network | There is an urgent need to apply face alignment in a memory-efficient and real-time manner due to the recent explosion of face recognition applications. However, impact factors such as large pose variation and computational inefficiency still hinder its broad implementation. To this end, we propose a computationally efficient deep evolutionary model integrated with 3D Diffusion Heap Maps (DHM). First, we introduce a sparse 3D DHM to assist the initial modeling process under extreme pose conditions. Afterward, a simple and effective CNN feature is extracted and fed to a Recurrent Neural Network (RNN) for evolutionary learning. To accelerate the model, we propose an efficient network structure that speeds up the evolutionary learning process through a factorization strategy. Extensive experiments on three popular alignment databases demonstrate the advantage of the proposed models over the state-of-the-art, especially under large-pose conditions. Notably, the computational speed of our model is 6 times faster than the state-of-the-art on CPU and 14 times on GPU. We also discuss and analyze the limitations of our models and future research work. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 150,887
2206.06367 | Does a Technique for Building Multimodal Representation Matter? --
Comparative Analysis | Creating a meaningful representation by fusing single modalities (e.g., text, images, or audio) is the core concept of multimodal learning. Although several techniques for building multimodal representations have been proven successful, they have not been compared yet. Therefore, it has been unclear which technique can be expected to yield the best results in a given scenario and what factors should be considered while choosing such a technique. This paper explores the most common techniques for building multimodal data representations -- the late fusion, the early fusion, and the sketch, and compares them in classification tasks. Experiments are conducted on three datasets: Amazon Reviews, MovieLens25M, and MovieLens1M datasets. In general, our results confirm that multimodal representations are able to boost the performance of unimodal models from an accuracy of 0.919 to 0.969 on Amazon Reviews and from an AUC of 0.907 to 0.918 on MovieLens25M. However, experiments on both MovieLens datasets indicate the importance of meaningful input data for the given task. In this article, we show that the choice of the technique for building multimodal representation is crucial to obtain the highest possible model's performance, together with the proper combination of modalities. Such choice relies on: the influence that each modality has on the analyzed machine learning (ML) problem; the type of the ML task; the memory constraints during the training and prediction phases. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 302,351
2405.15608 | Design and fabrication of autonomous electronic lablets for chemical
control | Lablets are autonomous microscopic particles with programmable CMOS electronics that can control electrokinetic phenomena and electrochemical reactions in solution via actuator and sensor microelectrodes. The lablets are designed to be rechargeable using an integrated supercapacitor, and to allow docking to one another or to a smart surface for interchange of energy, electronic information and chemicals. In this paper, we describe the design and fabrication of singulated lablets (CMOS2) at the scale of 100 by 200 μm, with the supercap adjacent to the functional lablet and occupying half the space. In other works, we have characterized the supercap and described the electronic design and proven functionality using arrays of these lablets. Here we present fabrication details for integrating functional coatings and the supercap and demonstrate electronic functionality of the lablets following singulation. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 457,016
2008.09427 | Combining Control Barrier Functions and Behavior Trees for Multi-Agent
Underwater Coverage Missions | Robot missions typically involve a number of desired objectives, such as avoiding collisions, staying connected to other robots, gathering information using sensors and returning to the charging station before the battery runs out. Some of these objectives need to be taken into account at the same time, such as avoiding collisions and staying connected, while others are focused upon during different parts of the execution, such as returning to the charging station and connectivity maintenance. In this paper, we show how Control Barrier Functions (CBFs) and Behavior Trees (BTs) can be combined in a principled manner to achieve both types of task compositions, with performance guarantees in terms of mission completion. We illustrate our method with a simulated underwater coverage mission. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 192,713
2305.12433 | ParticleWNN: a Novel Neural Networks Framework for Solving Partial
Differential Equations | Deep neural networks (DNNs) have been widely used to solve partial differential equations (PDEs) in recent years. In this work, a novel deep learning-based framework named Particle Weak-form based Neural Networks (ParticleWNN) is developed for solving PDEs in the weak form. In this framework, the trial space is defined as the space of DNNs, while the test space consists of functions compactly supported in extremely small regions, centered around particles. To facilitate the training of neural networks, an R-adaptive strategy is designed to adaptively modify the radius of regions during training. The ParticleWNN inherits the benefits of weak/variational formulation, requiring less regularity of the solution and a small number of quadrature points for computing integrals. Additionally, due to the special construction of the test functions, ParticleWNN enables parallel implementation and integral calculations only in extremely small regions. This framework is particularly desirable for solving problems with high-dimensional and complex domains. The efficiency and accuracy of ParticleWNN are demonstrated through several numerical examples, showcasing its superiority over state-of-the-art methods. The source code for the numerical examples presented in this paper is available at https://github.com/yaohua32/ParticleWNN. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 366,001 |
1807.08280 | Multi-scale Alignment and Contextual History for Attention Mechanism in
Sequence-to-sequence Model | A sequence-to-sequence model is a neural network module for mapping two sequences of different lengths. The sequence-to-sequence model has three core modules: encoder, decoder, and attention. Attention is the bridge that connects the encoder and decoder modules and improves model performance in many tasks. In this paper, we propose two ideas to improve sequence-to-sequence model performance by enhancing the attention module. First, we maintain the history of the location and the expected context from several previous time-steps. Second, we apply multiscale convolution from several previous attention vectors to the current decoder state. We utilized our proposed framework for sequence-to-sequence speech recognition and text-to-speech systems. The results reveal that our proposed extension could improve performance significantly compared to a standard attention baseline. | false | false | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 103,504 |
2302.06008 | ASR Bundestag: A Large-Scale political debate dataset in German | We present ASR Bundestag, a dataset for automatic speech recognition in German, consisting of 610 hours of aligned audio-transcript pairs for supervised training as well as 1,038 hours of unlabeled audio snippets for self-supervised learning, based on raw audio data and transcriptions from plenary sessions and committee meetings of the German parliament. In addition, we discuss utilized approaches for the automated creation of speech datasets and assess the quality of the resulting dataset based on evaluations and fine-tuning of a pre-trained state-of-the-art model. We make the dataset publicly available, including all subsets. | false | false | true | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 345,253
1312.3970 | An Extensive Evaluation of Filtering Misclassified Instances in
Supervised Classification Tasks | Removing or filtering outliers and mislabeled instances prior to training a learning algorithm has been shown to increase classification accuracy. A popular approach for handling outliers and mislabeled instances is to remove any instance that is misclassified by a learning algorithm. However, an examination of which learning algorithms to use for filtering as well as their effects on multiple learning algorithms over a large set of data sets has not been done. Previous work has generally been limited due to the large computational requirements to run such an experiment, and, thus, the examination has generally been limited to learning algorithms that are computationally inexpensive and using a small number of data sets. In this paper, we examine 9 learning algorithms as filtering algorithms as well as examining the effects of filtering in the 9 chosen learning algorithms on a set of 54 data sets. In addition to using each learning algorithm individually as a filter, we also use the set of learning algorithms as an ensemble filter and use an adaptive algorithm that selects a subset of the learning algorithms for filtering for a specific task and learning algorithm. We find that for most cases, using an ensemble of learning algorithms for filtering produces the greatest increase in classification accuracy. We also compare filtering with a majority voting ensemble. The voting ensemble significantly outperforms filtering unless there are high amounts of noise present in the data set. Additionally, we find that a majority voting ensemble is robust to noise as filtering with a voting ensemble does not increase the classification accuracy of the voting ensemble. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 29,083 |
2501.06137 | Supervision policies can shape long-term risk management in
general-purpose AI models | The rapid proliferation and deployment of General-Purpose AI (GPAI) models, including large language models (LLMs), present unprecedented challenges for AI supervisory entities. We hypothesize that these entities will need to navigate an emergent ecosystem of risk and incident reporting, likely to exceed their supervision capacity. To investigate this, we develop a simulation framework parameterized by features extracted from the diverse landscape of risk, incident, or hazard reporting ecosystems, including community-driven platforms, crowdsourcing initiatives, and expert assessments. We evaluate four supervision policies: non-prioritized (first-come, first-served), random selection, priority-based (addressing the highest-priority risks first), and diversity-prioritized (balancing high-priority risks with comprehensive coverage across risk types). Our results indicate that while priority-based and diversity-prioritized policies are more effective at mitigating high-impact risks, particularly those identified by experts, they may inadvertently neglect systemic issues reported by the broader community. This oversight can create feedback loops that amplify certain types of reporting while discouraging others, leading to a skewed perception of the overall risk landscape. We validate our simulation results with several real-world datasets, including one with over a million ChatGPT interactions, of which more than 150,000 conversations were identified as risky. This validation underscores the complex trade-offs inherent in AI risk supervision and highlights how the choice of risk management policies can shape the future landscape of AI risks across diverse GPAI models used in society. | false | false | false | true | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 523,851 |
1911.06470 | Self-supervised Adversarial Training | Recent work has demonstrated that neural networks are vulnerable to adversarial examples. To escape from the predicament, many works try to harden the model in various ways, among which adversarial training is an effective one that learns robust feature representations so as to resist adversarial attacks. Meanwhile, self-supervised learning aims to learn robust and semantic embeddings from the data itself. With these views, we introduce self-supervised learning to defend against adversarial examples in this paper. Specifically, the self-supervised representation coupled with k-Nearest Neighbour is proposed for classification. To further strengthen the defense ability, self-supervised adversarial training is proposed, which maximizes the mutual information between the representations of original examples and the corresponding adversarial examples. Experimental results show that the self-supervised representation outperforms its supervised version in respect of robustness, and that self-supervised adversarial training can further improve the defense ability efficiently. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 153,546
2210.13992 | SphNet: A Spherical Network for Semantic Pointcloud Segmentation | Semantic segmentation for robotic systems can enable a wide range of applications, from self-driving cars and augmented reality systems to domestic robots. We argue that a spherical representation is a natural one for egocentric pointclouds. Thus, in this work, we present a novel framework exploiting such a representation of LiDAR pointclouds for the task of semantic segmentation. Our approach is based on a spherical convolutional neural network that can seamlessly handle observations from various sensor systems (e.g., different LiDAR systems) and provides an accurate segmentation of the environment. We operate in two distinct stages: First, we encode the projected input pointclouds to spherical features. Second, we decode and back-project the spherical features to achieve an accurate semantic segmentation of the pointcloud. We evaluate our method with respect to state-of-the-art projection-based semantic segmentation approaches using well-known public datasets. We demonstrate that the spherical representation enables us to provide more accurate segmentation and to have a better generalization to sensors with different field-of-view and number of beams than what was seen during training. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 326,390 |
1712.10011 | The Merits of Sharing a Ride | The culture of sharing instead of ownership is sharply increasing in individuals' behavior. Particularly in transportation, the concepts of sharing a ride, whether carpooling or ridesharing, have recently been adopted. An efficient optimization approach to match passengers in real time is the core of any ridesharing system. In this paper, we model ridesharing as an online matching problem on general graphs such that passengers do not drive private cars and use shared taxis. We propose an optimization algorithm to solve it. The outlined algorithm calculates the optimal waiting time when a passenger arrives. This leads to a matching with minimal overall overheads while maximizing the number of partnerships. To evaluate the behavior of our algorithm, we used a real-life NYC taxi dataset. Results show a substantial reduction in overall overheads. | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 87,434 |
2110.14309 | Inferring the Class Conditional Response Map for Weakly Supervised Semantic Segmentation | Image-level weakly supervised semantic segmentation (WSSS) relies on class activation maps (CAMs) for pseudo labels generation. As CAMs only highlight the most discriminative regions of objects, the generated pseudo labels are usually unsatisfactory to serve directly as supervision. To solve this, most existing approaches follow a multi-training pipeline to refine CAMs for better pseudo-labels, which includes: 1) re-training the classification model to generate CAMs; 2) post-processing CAMs to obtain pseudo labels; and 3) training a semantic segmentation model with the obtained pseudo labels. However, this multi-training pipeline requires complicated adjustment and additional time. To address this, we propose a class-conditional inference strategy and an activation aware mask refinement loss function to generate better pseudo labels without re-training the classifier. The class conditional inference-time approach is presented to separately and iteratively reveal the classification network's hidden object activation to generate more complete response maps. Further, our activation aware mask refinement loss function introduces a novel way to exploit saliency maps during segmentation training and refine the foreground object masks without suppressing background objects. Our method achieves superior WSSS results without requiring re-training of the classifier. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 263,496 |
2312.10029 | Challenges with unsupervised LLM knowledge discovery | We show that existing unsupervised methods on large language model (LLM) activations do not discover knowledge -- instead they seem to discover whatever feature of the activations is most prominent. The idea behind unsupervised knowledge elicitation is that knowledge satisfies a consistency structure, which can be used to discover knowledge. We first prove theoretically that arbitrary features (not just knowledge) satisfy the consistency structure of a particular leading unsupervised knowledge-elicitation method, contrast-consistent search (Burns et al. - arXiv:2212.03827). We then present a series of experiments showing settings in which unsupervised methods result in classifiers that do not predict knowledge, but instead predict a different prominent feature. We conclude that existing unsupervised methods for discovering latent knowledge are insufficient, and we contribute sanity checks to apply to evaluating future knowledge elicitation methods. Conceptually, we hypothesise that the identification issues explored here, e.g. distinguishing a model's knowledge from that of a simulated character's, will persist for future unsupervised methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 415,972 |
2202.05798 | The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns via Spotlights of Attention | Linear layers in neural networks (NNs) trained by gradient descent can be expressed as a key-value memory system which stores all training datapoints and the initial weights, and produces outputs using unnormalised dot attention over the entire training experience. While this has been technically known since the 1960s, no prior work has effectively studied the operations of NNs in such a form, presumably due to prohibitive time and space complexities and impractical model sizes, all of them growing linearly with the number of training patterns which may get very large. However, this dual formulation offers a possibility of directly visualising how an NN makes use of training patterns at test time, by examining the corresponding attention weights. We conduct experiments on small scale supervised image classification tasks in single-task, multi-task, and continual learning settings, as well as language modelling, and discuss potentials and limits of this view for better understanding and interpreting how NNs exploit training patterns. Our code is public. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 279,990 |
2403.14597 | Extended Reality for Enhanced Human-Robot Collaboration: a Human-in-the-Loop Approach | The rise of automation has provided an opportunity to achieve higher efficiency in manufacturing processes, yet it often compromises the flexibility required to promptly respond to evolving market needs and meet the demand for customization. Human-robot collaboration attempts to tackle these challenges by combining the strength and precision of machines with human ingenuity and perceptual understanding. In this paper, we conceptualize and propose an implementation framework for an autonomous, machine learning-based manipulator that incorporates human-in-the-loop principles and leverages Extended Reality (XR) to facilitate intuitive communication and programming between humans and robots. Furthermore, the conceptual framework foresees human involvement directly in the robot learning process, resulting in higher adaptability and task generalization. The paper highlights key technologies enabling the proposed framework, emphasizing the importance of developing the digital ecosystem as a whole. Additionally, we review the existing implementation approaches of XR in human-robot collaboration, showcasing diverse perspectives and methodologies. The challenges and future outlooks are discussed, delving into the major obstacles and potential research avenues of XR for more natural human-robot interaction and integration in the industrial landscape. | true | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 440,145 |
2307.05627 | Separate-and-Aggregate: A Transformer-based Patch Refinement Model for Knowledge Graph Completion | Knowledge graph completion (KGC) is the task of inferring missing facts from a given knowledge graph (KG). Previous KGC methods typically represent knowledge graph entities and relations as trainable continuous embeddings and fuse the embeddings of the entity $h$ (or $t$) and relation $r$ into hidden representations of query $(h, r, ?)$ (or $(?, r, t)$) to approximate the missing entities. To achieve this, they either use shallow linear transformations or deep convolutional modules. However, the linear transformations suffer from the expressiveness issue while the deep convolutional modules introduce unnecessary inductive bias, which could potentially degrade the model performance. Thus, we propose a novel Transformer-based Patch Refinement Model (PatReFormer) for KGC. PatReFormer first segments the embedding into a sequence of patches and then employs cross-attention modules to allow bi-directional embedding feature interaction between the entities and relations, leading to a better understanding of the underlying KG. We conduct experiments on four popular KGC benchmarks, WN18RR, FB15k-237, YAGO37 and DB100K. The experimental results show significant performance improvement over existing KGC methods on standard KGC evaluation metrics, e.g., MRR and H@n. Our analysis first verifies the effectiveness of our model design choices in PatReFormer. We then find that PatReFormer can better capture KG information from a large relation embedding dimension. Finally, we demonstrate that the strength of PatReFormer lies in complex relation types, compared to other KGC models. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 378,806 |
2501.09294 | Efficient Few-Shot Medical Image Analysis via Hierarchical Contrastive Vision-Language Learning | Few-shot learning in medical image classification presents a significant challenge due to the limited availability of annotated data and the complex nature of medical imagery. In this work, we propose Adaptive Vision-Language Fine-tuning with Hierarchical Contrastive Alignment (HiCA), a novel framework that leverages the capabilities of Large Vision-Language Models (LVLMs) for medical image analysis. HiCA introduces a two-stage fine-tuning strategy, combining domain-specific pretraining and hierarchical contrastive learning to align visual and textual representations at multiple levels. We evaluate our approach on two benchmark datasets, Chest X-ray and Breast Ultrasound, achieving state-of-the-art performance in both few-shot and zero-shot settings. Further analyses demonstrate the robustness, generalizability, and interpretability of our method, with substantial improvements in performance compared to existing baselines. Our work highlights the potential of hierarchical contrastive strategies in adapting LVLMs to the unique challenges of medical imaging tasks. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 525,086 |
2007.14805 | A framework for optimizing COVID-19 testing policy using a Multi Armed Bandit approach | Testing is an important part of tackling the COVID-19 pandemic. Availability of testing is a bottleneck due to constrained resources and effective prioritization of individuals is necessary. Here, we discuss the impact of different prioritization policies on COVID-19 patient discovery and the ability of governments and health organizations to use the results for effective decision making. We suggest a framework for testing that balances the maximal discovery of positive individuals with the need for population-based surveillance aimed at understanding disease spread and characteristics. This framework draws from similar approaches to prioritization in the domain of cyber-security based on ranking individuals using a risk score and then reserving a portion of the capacity for random sampling. This approach is an application of Multi-Armed-Bandits maximizing exploration/exploitation of the underlying distribution. We find that individuals can be ranked for effective testing using a few simple features, and that by ranking them using such models we can capture 65% (CI: 64.7%-68.3%) of the positive individuals using less than 20% of the testing capacity, or 92.1% (CI: 91.1%-93.2%) of positive individuals using 70% of the capacity, allowing us to reserve a significant portion of the tests for population studies. Our approach allows experts and decision-makers to tailor the resulting policies as needed, offering transparency into the ranking policy and the ability to understand the disease spread in the population and react quickly and in an informed manner. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 189,497 |
2206.02902 | Goal-Space Planning with Subgoal Models | This paper investigates a new approach to model-based reinforcement learning using background planning: mixing (approximate) dynamic programming updates and model-free updates, similar to the Dyna architecture. Background planning with learned models is often worse than model-free alternatives, such as Double DQN, even though the former uses significantly more memory and computation. The fundamental problem is that learned models can be inaccurate and often generate invalid states, especially when iterated many steps. In this paper, we avoid this limitation by constraining background planning to a set of (abstract) subgoals and learning only local, subgoal-conditioned models. This goal-space planning (GSP) approach is more computationally efficient, naturally incorporates temporal abstraction for faster long-horizon planning and avoids learning the transition dynamics entirely. We show that our GSP algorithm can propagate value from an abstract space in a manner that helps a variety of base learners learn significantly faster in different domains. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 301,069 |
2412.11412 | V-MIND: Building Versatile Monocular Indoor 3D Detector with Diverse 2D Annotations | The field of indoor monocular 3D object detection is gaining significant attention, fueled by the increasing demand in VR/AR and robotic applications. However, its advancement is impeded by the limited availability and diversity of 3D training data, owing to the labor-intensive nature of 3D data collection and annotation processes. In this paper, we present V-MIND (Versatile Monocular INdoor Detector), which enhances the performance of indoor 3D detectors across a diverse set of object classes by harnessing publicly available large-scale 2D datasets. By leveraging well-established monocular depth estimation techniques and camera intrinsic predictors, we can generate 3D training data by converting large-scale 2D images into 3D point clouds and subsequently deriving pseudo 3D bounding boxes. To mitigate distance errors inherent in the converted point clouds, we introduce a novel 3D self-calibration loss for refining the pseudo 3D bounding boxes during training. Additionally, we propose a novel ambiguity loss to address the ambiguity that arises when introducing new classes from 2D datasets. Finally, through joint training with existing 3D datasets and pseudo 3D bounding boxes derived from 2D datasets, V-MIND achieves state-of-the-art object detection performance across a wide range of classes on the Omni3D indoor dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 517,394 |
2410.12265 | An Automatic and Cost-Efficient Peer-Review Framework for Language Generation Evaluation | With the rapid development of large language models (LLMs), how to efficiently evaluate them has become an important research question. Existing evaluation methods often suffer from high costs, limited test formats, the need for human references, and systematic evaluation biases. To address these limitations, our study introduces Auto-PRE, an automatic LLM evaluation framework based on peer review. In contrast to previous studies that rely on human annotations, Auto-PRE selects evaluator LLMs automatically based on their inherent traits, including consistency, self-confidence, and pertinence. We conduct extensive experiments on three tasks: summary generation, non-factoid question-answering, and dialogue generation. Experimental results indicate that Auto-PRE achieves state-of-the-art performance at a lower cost. Moreover, our study highlights the impact of prompt strategies and evaluation formats on evaluation performance, offering guidance for method optimization in the future. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 498,939 |
2310.15950 | Representation Learning with Large Language Models for Recommendation | Recommender systems have seen significant advancements with the influence of deep learning and graph neural networks, particularly in capturing complex user-item relationships. However, these graph-based recommenders heavily depend on ID-based data, potentially disregarding valuable textual information associated with users and items, resulting in less informative learned representations. Moreover, the utilization of implicit feedback data introduces potential noise and bias, posing challenges for the effectiveness of user preference learning. While the integration of large language models (LLMs) into traditional ID-based recommenders has gained attention, challenges such as scalability issues, limitations in text-only reliance, and prompt input constraints need to be addressed for effective implementation in practical recommender systems. To address these challenges, we propose a model-agnostic framework RLMRec that aims to enhance existing recommenders with LLM-empowered representation learning. It proposes a recommendation paradigm that integrates representation learning with LLMs to capture intricate semantic aspects of user behaviors and preferences. RLMRec incorporates auxiliary textual signals, develops a user/item profiling paradigm empowered by LLMs, and aligns the semantic space of LLMs with the representation space of collaborative relational signals through a cross-view alignment framework. This work further establishes a theoretical foundation demonstrating that incorporating textual signals through mutual information maximization enhances the quality of representations. In our evaluation, we integrate RLMRec with state-of-the-art recommender models, while also analyzing its efficiency and robustness to noisy data. Our implementation codes are available at https://github.com/HKUDS/RLMRec. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 402,513 |