Dataset schema (columns, dtypes, and value ranges):

id: string (length 9 to 16)
title: string (length 4 to 278)
abstract: string (length 3 to 4.08k)
cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
__index_level_0__: int64 (0 to 541k)
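The schema above describes one record of a multi-label arXiv-classification dataset: one boolean column per category, set when the paper belongs to that category. A minimal sketch of how such a record might be represented and its active labels extracted; the `true_labels` helper and the dict representation are illustrative assumptions, not part of the dataset itself.

```python
# Category columns in the order they appear in the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def true_labels(record: dict) -> list[str]:
    """Return the category columns whose boolean flag is set, in schema order."""
    return [c for c in CATEGORY_COLUMNS if record.get(c, False)]

# Row mirroring the first record below (cs.AI and cs.RO set, all others false).
row = {
    "id": "2011.05180",
    "title": "Generation of Human-aware Navigation Maps using Graph Neural Networks",
    "cs.AI": True,
    "cs.RO": True,
}

print(true_labels(row))  # -> ['cs.AI', 'cs.RO']
```

Because `record.get(c, False)` defaults missing flags to false, the helper tolerates sparse rows that only list their true categories.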
2011.05180
Generation of Human-aware Navigation Maps using Graph Neural Networks
Minimising the discomfort caused by robots when navigating in social situations is crucial for them to be accepted. This paper presents a machine learning-based framework that bootstraps existing one-dimensional datasets to generate a cost map dataset, and a model combining Graph Neural Network and Convolutional Neural Network layers to produce cost maps for human-aware navigation in real time. The proposed framework is evaluated against the original one-dimensional dataset and in simulated navigation tasks. The results outperform similar state-of-the-art methods on both dataset accuracy and the navigation metrics used. The applications of the proposed framework are not limited to human-aware navigation; it could be applied to other fields where map generation is needed.
labels (true): cs.AI, cs.RO; all other category flags: false
__index_level_0__: 205,827
2003.10526
Hessian metric via transport information geometry
We propose to study the Hessian metric of a functional on the space of probability measures endowed with the Wasserstein $2$-metric. We name it the transport Hessian metric, which contains and extends the classical Wasserstein-$2$ metric. We formulate several dynamical systems associated with transport Hessian metrics. Several connections between transport Hessian metrics and mathematical physics equations are discovered. For example, the transport Hessian gradient flow, including Newton's flow, formulates a mean-field kernel Stein variational gradient flow; the transport Hessian Hamiltonian flow of Boltzmann-Shannon entropy forms the shallow water equation; and the transport Hessian gradient flow of Fisher information leads to the heat equation. Several examples and closed-form solutions for transport Hessian distances are presented.
labels (true): cs.IT; all other category flags: false
__index_level_0__: 169,357
2208.09550
Sudakov-Fernique post-AMP, and a new proof of the local convexity of the TAP free energy
In many problems in modern statistics and machine learning, it is often of interest to establish that a first order method on a non-convex risk function eventually enters a region of parameter space in which the risk is locally convex. We derive an asymptotic comparison inequality, which we call the Sudakov-Fernique post-AMP inequality, which, in a certain class of problems involving a GOE matrix, is able to probe properties of an optimization landscape locally around the iterates of an approximate message passing (AMP) algorithm. As an example of its use, we provide a new, and arguably simpler, proof of some of the results of Celentano et al. (2021), which establishes that the so-called TAP free energy in the $\mathbb{Z}_2$-synchronization problem is locally convex in the region to which AMP converges. We further prove a conjecture of El Alaoui et al. (2022) involving the local convexity of a related but distinct TAP free energy, which, as a consequence, confirms that their algorithm efficiently samples from the Sherrington-Kirkpatrick Gibbs measure throughout the "easy" regime.
labels (true): cs.LG; all other category flags: false
__index_level_0__: 313,732
2108.05891
Page-level Optimization of e-Commerce Item Recommendations
The item details page (IDP) is a web page on an e-commerce website that provides information on a specific product or item listing. Just below the details of the item on this page, the buyer can usually find recommendations for other relevant items. These are typically in the form of a series of modules or carousels, with each module containing a set of recommended items. The selection and ordering of these item recommendation modules are intended to increase discoverability of relevant items and encourage greater user engagement, while simultaneously showcasing diversity of inventory and satisfying other business objectives. Item recommendation modules on the IDP are often curated and statically configured for all customers, ignoring opportunities for personalization. In this paper, we present a scalable end-to-end production system to optimize the personalized selection and ordering of item recommendation modules on the IDP in real-time by utilizing deep neural networks. Through extensive offline experimentation and online A/B testing, we show that our proposed system achieves significantly higher click-through and conversion rates compared to other existing methods. In our online A/B test, our framework improved click-through rate by 2.48% and purchase-through rate by 7.34% over a static configuration.
labels (true): cs.IR, cs.LG; all other category flags: false
__index_level_0__: 250,449
2401.15377
Validation of artificial neural networks to model the acoustic behaviour of induction motors
In the last decade, the sound quality of electric induction motors has become a hot topic in the research field. Especially due to their large number of applications, the population is exposed to physical and psychological discomfort caused by their noise emission. Therefore, it is necessary to minimise its psychological impact on the population. The main goal of this work is thus to evaluate the use of multitask artificial neural networks as a modelling technique for simultaneously predicting psychoacoustic parameters of induction motors. Several inputs are used, such as the electrical magnitudes of the motor power signal and the number of poles, instead of separating the noise of the electric motor from the environmental noise. Two different kinds of artificial neural networks are proposed to evaluate the acoustic quality of induction motors, using the equivalent sound pressure, the loudness, the roughness, and the sharpness as outputs. Concretely, two different topologies have been considered: simple models and more complex models. The former are more interpretable, while the latter lead to higher accuracy at the cost of hiding the cause-effect relationship. Focusing on the simple interpretable models, product unit neural networks achieved the best results for MSE and for SEP. The main benefit of this product unit model is its simplicity, since only 10 input variables are used, outlining the effective transfer mechanism of multitask artificial neural networks to extract common features of multiple tasks. Finally, a deep analysis of the acoustic quality of induction motors is done using the best product unit neural networks.
labels (true): cs.SD, cs.LG; all other category flags: false
__index_level_0__: 424,429
2410.18111
Data Efficiency for Large Recommendation Models
Large recommendation models (LRMs) are fundamental to the multi-billion dollar online advertising industry, processing massive datasets of hundreds of billions of examples before transitioning to continuous online training to adapt to rapidly changing user behavior. The massive scale of data directly impacts both computational costs and the speed at which new methods can be evaluated (R&D velocity). This paper presents actionable principles and high-level frameworks to guide practitioners in optimizing training data requirements. These strategies have been successfully deployed in Google's largest Ads CTR prediction models and are broadly applicable beyond LRMs. We outline the concept of data convergence, describe methods to accelerate this convergence, and finally, detail how to optimally balance training data volume with model size.
labels (true): cs.IR, cs.LG; all other category flags: false
__index_level_0__: 501,754
1909.06296
Bayesian parameter estimation using conditional variational autoencoders for gravitational-wave astronomy
Gravitational wave (GW) detection is now commonplace and as the sensitivity of the global network of GW detectors improves, we will observe $\mathcal{O}(100)$s of transient GW events per year. The current methods used to estimate their source parameters employ optimally sensitive but computationally costly Bayesian inference approaches, where typical analyses have taken between 6 hours and 5 days. For binary neutron star and neutron star black hole systems, prompt counterpart electromagnetic (EM) signatures are expected on timescales of 1 second -- 1 minute, and the current fastest method for alerting EM follow-up observers can provide estimates in $\mathcal{O}(1)$ minute, on a limited range of key source parameters. Here we show that a conditional variational autoencoder pre-trained on binary black hole signals can return Bayesian posterior probability estimates. The training procedure need only be performed once for a given prior parameter space and the resulting trained machine can then generate samples describing the posterior distribution $\sim 6$ orders of magnitude faster than existing techniques.
labels (true): cs.LG; all other category flags: false
__index_level_0__: 145,338
2202.00076
Optimal Estimation of Off-Policy Policy Gradient via Double Fitted Iteration
Policy gradient (PG) estimation becomes a challenge when we are not allowed to sample with the target policy but only have access to a dataset generated by some unknown behavior policy. Conventional methods for off-policy PG estimation often suffer from either significant bias or exponentially large variance. In this paper, we propose the double Fitted PG estimation (FPG) algorithm. FPG can work with an arbitrary policy parameterization, assuming access to a Bellman-complete value function class. In the case of linear value function approximation, we provide a tight finite-sample upper bound on policy gradient estimation error, that is governed by the amount of distribution mismatch measured in feature space. We also establish the asymptotic normality of FPG estimation error with a precise covariance characterization, which is further shown to be statistically optimal with a matching Cramer-Rao lower bound. Empirically, we evaluate the performance of FPG on both policy gradient estimation and policy optimization, using either softmax tabular or ReLU policy networks. Under various metrics, our results show that FPG significantly outperforms existing off-policy PG estimation methods based on importance sampling and variance reduction techniques.
labels (true): cs.LG; all other category flags: false
__index_level_0__: 278,011
2312.06674
Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations
We introduce Llama Guard, an LLM-based input-output safeguard model geared towards Human-AI conversation use cases. Our model incorporates a safety risk taxonomy, a valuable tool for categorizing a specific set of safety risks found in LLM prompts (i.e., prompt classification). This taxonomy is also instrumental in classifying the responses generated by LLMs to these prompts, a process we refer to as response classification. For the purpose of both prompt and response classification, we have meticulously gathered a dataset of high quality. Llama Guard, a Llama2-7b model that is instruction-tuned on our collected dataset, albeit low in volume, demonstrates strong performance on existing benchmarks such as the OpenAI Moderation Evaluation dataset and ToxicChat, where its performance matches or exceeds that of currently available content moderation tools. Llama Guard functions as a language model, carrying out multi-class classification and generating binary decision scores. Furthermore, the instruction fine-tuning of Llama Guard allows for the customization of tasks and the adaptation of output formats. This feature enhances the model's capabilities, such as enabling the adjustment of taxonomy categories to align with specific use cases, and facilitating zero-shot or few-shot prompting with diverse taxonomies at the input. We are making Llama Guard model weights available and we encourage researchers to further develop and adapt them to meet the evolving needs of the community for AI safety.
labels (true): cs.AI, cs.CL; all other category flags: false
__index_level_0__: 414,627
2303.02411
The Contribution of Knowledge in Visiolinguistic Learning: A Survey on Tasks and Challenges
Recent advancements in visiolinguistic (VL) learning have allowed the development of multiple models and techniques that offer several impressive implementations, currently able to resolve a variety of tasks that require the collaboration of vision and language. Current datasets used for VL pre-training contain only a limited amount of visual and linguistic knowledge, thus significantly limiting the generalization capabilities of many VL models. External knowledge sources such as knowledge graphs (KGs) and Large Language Models (LLMs) are able to cover such generalization gaps by filling in missing knowledge, resulting in the emergence of hybrid architectures. In the current survey, we analyze tasks that have benefited from such hybrid approaches. Moreover, we categorize existing knowledge sources and types, proceeding to a discussion of the KG vs. LLM dilemma and its potential impact on future hybrid approaches.
labels (true): cs.AI, cs.CL, cs.CV; all other category flags: false
__index_level_0__: 349,345
2009.14684
Benchmark for Anonymous Video Analytics
Out-of-home audience measurement aims to count and characterize the people exposed to advertising content in the physical world. While audience measurement solutions based on computer vision are of increasing interest, no commonly accepted benchmark exists to evaluate and compare their performance. In this paper, we propose the first benchmark for digital out-of-home audience measurement that evaluates the vision-based tasks of audience localization and counting, and audience demographics. The benchmark is composed of a novel dataset captured at multiple locations and a set of performance measures. Using the benchmark, we present an in-depth comparison of eight open-source algorithms on four hardware platforms with GPU and CPU-optimized inferences and of two commercial off-the-shelf solutions for localization, count, age, and gender estimation. This benchmark and related open-source codes are available at http://ava.eecs.qmul.ac.uk.
labels (true): cs.CV; all other category flags: false
__index_level_0__: 198,110
2401.05159
Derm-T2IM: Harnessing Synthetic Skin Lesion Data via Stable Diffusion Models for Enhanced Skin Disease Classification using ViT and CNN
This study explores the utilization of dermatoscopic synthetic data generated through stable diffusion models as a strategy for enhancing the robustness of machine learning model training. Synthetic data generation plays a pivotal role in mitigating challenges associated with limited labeled datasets, thereby facilitating more effective model training. In this context, we aim to incorporate enhanced data transformation techniques by extending the recent success of few-shot learning and small-data representation in text-to-image latent diffusion models. The optimally tuned model is further used for rendering high-quality skin lesion synthetic data with diverse and realistic characteristics, providing a valuable supplement and added diversity to the existing training data. We investigate the impact of incorporating the newly generated synthetic data into the training pipeline of state-of-the-art machine learning models, assessing its effectiveness in enhancing model performance and generalization to unseen real-world data. Our experimental results demonstrate that the synthetic data generated through stable diffusion models helps improve the robustness and adaptability of end-to-end CNN and vision transformer models on two different real-world skin lesion datasets.
labels (true): cs.AI, cs.CV; all other category flags: false
__index_level_0__: 420,661
2109.09862
Language Identification with a Reciprocal Rank Classifier
Language identification is a critical component of language processing pipelines (Jauhiainen et al., 2019) and is not a solved problem in real-world settings. We present a lightweight and effective language identifier that is robust to changes of domain and to the absence of copious training data. The key idea for classification is that the reciprocal of the rank in a frequency table makes an effective additive feature score, hence the term Reciprocal Rank Classifier (RRC). The key finding for language classification is that ranked lists of words and frequencies of characters form a sufficient and robust representation of the regularities of key languages and their orthographies. We test this on two 22-language data sets and demonstrate zero-effort domain adaptation from a Wikipedia training set to a Twitter test set. When trained on Wikipedia but applied to Twitter, the macro-averaged F1-score of a conventionally trained SVM classifier drops from 90.9% to 77.7%. By contrast, the macro F1-score of RRC drops only from 93.1% to 90.6%. These classifiers are compared with those from fastText and langid. The RRC performs better than these established systems in most experiments, especially on short Wikipedia texts and Twitter. The RRC classifier can be improved for particular domains and conversational situations by adding words to the ranked lists. Using new terms learned from such conversations, we demonstrate a further 7.9% increase in accuracy of sample message classification, and a 1.7% increase for conversation classification. Surprisingly, this made results on Twitter data slightly worse. The RRC classifier is available as an open source Python package (https://github.com/LivePersonInc/lplangid).
labels (true): cs.AI, cs.CL; all other category flags: false
__index_level_0__: 256,427
1401.2692
On the Optimality of Treating Interference as Noise for $K$ user Parallel Gaussian Interference Networks
It has been shown recently by Geng et al. that in a $K$ user Gaussian interference network, if for each user the desired signal strength is no less than the sum of the strengths of the strongest interference from this user and the strongest interference to this user (all signal strengths measured in dB scale), then power control and treating interference as noise (TIN) is sufficient to achieve the entire generalized degrees of freedom (GDoF) region. Motivated by the intuition that the deterministic model of Avestimehr et al. (ADT deterministic model) is particularly suited for exploring the optimality of TIN, the results of Geng et al. are first re-visited under the ADT deterministic model, and are shown to directly translate between the Gaussian and deterministic settings. Next, we focus on the extension of these results to parallel interference networks, from a sum-capacity/sum-GDoF perspective. To this end, we interpret the explicit characterization of the sum-capacity/sum-GDoF of a TIN optimal network (without parallel channels) as a minimum weighted matching problem in combinatorial optimization, and obtain a simple characterization in terms of a partition of the interference network into vertex-disjoint cycles. Aided by insights from the cyclic partition, the sum-capacity optimality of TIN for $K$ user parallel interference networks is characterized for the ADT deterministic model, leading ultimately to corresponding GDoF results for the Gaussian setting. In both cases, subject to a mild invertibility condition the optimality of TIN is shown to extend to parallel networks in a separable fashion.
labels (true): cs.IT; all other category flags: false
__index_level_0__: 29,777
2409.19014
FLEX: Expert-level False-Less EXecution Metric for Reliable Text-to-SQL Benchmark
Text-to-SQL systems have become crucial for translating natural language into SQL queries in various industries, enabling non-technical users to perform complex data operations. The need for accurate evaluation methods has increased as these systems have grown more sophisticated. However, the Execution Accuracy (EX), the most prevalent evaluation metric, still shows many false positives and negatives. Thus, this paper introduces FLEX (False-Less EXecution), a novel approach to evaluating text-to-SQL systems using large language models (LLMs) to emulate human expert-level evaluation of SQL queries. Our metric improves agreement with human experts (from 62 to 87.04 in Cohen's kappa) with comprehensive context and sophisticated criteria. Our extensive experiments yield several key insights: (1) Models' performance increases by over 2.6 points on average, substantially affecting rankings on Spider and BIRD benchmarks; (2) The underestimation of models in EX primarily stems from annotation quality issues; and (3) Model performance on particularly challenging questions tends to be overestimated. This work contributes to a more accurate and nuanced evaluation of text-to-SQL systems, potentially reshaping our understanding of state-of-the-art performance in this field.
labels (true): cs.IR, cs.LG, cs.CL; all other category flags: false
__index_level_0__: 492,508
2009.11943
A distributed service-matching coverage via heterogeneous mobile agents
We propose a distributed deployment solution for a group of mobile agents that should provide a service for a dense set of targets. The agents are heterogeneous in the sense that their quality of service (QoS), modeled as a spatial Gaussian distribution, is different. To provide the best service, the objective is to deploy the agents such that their collective QoS distribution is as close as possible to the density distribution of the targets. We propose a distributed consensus-based expectation-maximization (EM) algorithm to estimate the target density distribution, modeled as a Gaussian mixture model (GMM). The GMM not only gives an estimate of the targets' distribution, but also partitions the area into subregions, each of which is represented by one of the GMM's Gaussian bases. We use the Kullback-Leibler divergence (KLD) to evaluate the similarity between the QoS distribution of each agent and each Gaussian basis/subregion. Then, a distributed assignment problem is formulated and solved as a discrete optimal mass transport problem that allocates each agent to a subregion by taking the KLD as the assignment cost. We demonstrate our results with a sensor deployment for event detection, where the sensor's QoS is modeled as an anisotropic Gaussian distribution.
labels (true): cs.SY; all other category flags: false
__index_level_0__: 197,286
2012.12585
Accurate evaluation of integrals in slender-body formulations for fibers in viscous flow
A non-local slender body approximation for slender flexible fibers in Stokes flow can be derived, yielding an integral equation along the center lines of the fibers that involves a slenderness parameter. The formulation contains a so-called finite part singular integral, and can in the case of several fibers or evaluation of the flow field require the evaluation of nearly singular integrals. We introduce a numerical technique to accurately and efficiently evaluate the finite part integral. This technique can be applied combined with any panel based quadrature rule and will add no additional cost except for a small precomputation of modified quadrature weights. We also show how a related technique that was recently introduced can be applied for the evaluation of the nearly singular integrals.
labels (true): cs.CE, Other; all other category flags: false
__index_level_0__: 212,986
1612.03901
Enhancing the Physical Layer Security of Non-orthogonal Multiple Access in Large-Scale Networks
This paper investigates the physical layer security of non-orthogonal multiple access (NOMA) in large-scale networks by invoking stochastic geometry. Both single-antenna and multiple-antenna aided transmission scenarios are considered, where the base station (BS) communicates with randomly distributed NOMA users. In the single-antenna scenario, we adopt a protected zone around the BS to establish an eavesdropper-exclusion area with the aid of careful channel-ordering of the NOMA users. In the multiple-antenna scenario, artificial noise is generated at the BS for further improving the security of a beamforming-aided system. In order to characterize the secrecy performance, we derive new exact expressions of the secrecy outage probability for both single-antenna and multiple-antenna aided scenarios. To obtain further insights, 1) for the single-antenna scenario, we perform secrecy diversity order analysis of the selected user pair. The analytical results derived demonstrate that the secrecy diversity order is determined by the specific user having the worse channel condition among the selected user pair; and 2) for the multiple-antenna scenario, we derive the asymptotic secrecy outage probability, when the number of transmit antennas tends to infinity. Monte Carlo simulations are provided for verifying the analytical results derived and to show that: i) The security performance of the NOMA networks can be improved by invoking the protected zone and by generating artificial noise at the BS; and ii) The asymptotic secrecy outage probability is close to the exact secrecy outage probability.
labels (true): cs.IT; all other category flags: false
__index_level_0__: 65,439
2206.13517
ProGen2: Exploring the Boundaries of Protein Language Models
Attention-based models trained on protein sequences have demonstrated incredible success at classification and generation tasks relevant for artificial intelligence-driven protein design. However, we lack a sufficient understanding of how very large-scale models and data play a role in effective protein model development. We introduce a suite of protein language models, named ProGen2, that are scaled up to 6.4B parameters and trained on different sequence datasets drawn from over a billion proteins from genomic, metagenomic, and immune repertoire databases. ProGen2 models show state-of-the-art performance in capturing the distribution of observed evolutionary sequences, generating novel viable sequences, and predicting protein fitness without additional finetuning. As large model sizes and raw numbers of protein sequences continue to become more widely accessible, our results suggest that a growing emphasis needs to be placed on the data distribution provided to a protein sequence model. We release the ProGen2 models and code at https://github.com/salesforce/progen.
labels (true): cs.LG; all other category flags: false
__index_level_0__: 305,007
2406.04467
Small-E: Small Language Model with Linear Attention for Efficient Speech Synthesis
Recent advancements in text-to-speech (TTS) powered by language models have showcased remarkable capabilities in achieving naturalness and zero-shot voice cloning. Notably, the decoder-only transformer is the prominent architecture in this domain. However, transformers face challenges stemming from their quadratic complexity in sequence length, impeding training on lengthy sequences and resource-constrained hardware. Moreover, they lack a specific inductive bias with regard to the monotonic nature of TTS alignments. In response, we propose to replace transformers with emerging recurrent architectures and introduce specialized cross-attention mechanisms for reducing repeating and skipping issues. Consequently, our architecture can be efficiently trained on long samples and achieves state-of-the-art zero-shot voice cloning against baselines of comparable size. Our implementation and demos are available at https://github.com/theodorblackbird/lina-speech.
labels (true): cs.SD, cs.CL; all other category flags: false
__index_level_0__: 461,691
2404.11207
Exploring the Transferability of Visual Prompting for Multimodal Large Language Models
Although Multimodal Large Language Models (MLLMs) have demonstrated promising versatile capabilities, their performance is still inferior to specialized models on downstream tasks, which makes adaptation necessary to enhance their utility. However, fine-tuning methods require independent training for every model, leading to huge computation and memory overheads. In this paper, we propose a novel setting where we aim to improve the performance of diverse MLLMs with a group of shared parameters optimized for a downstream task. To achieve this, we propose Transferable Visual Prompting (TVP), a simple and effective approach to generate visual prompts that can transfer to different models and improve their performance on downstream tasks after being trained on only one model. We introduce two strategies to address the issue of cross-model feature corruption of existing visual prompting methods and enhance the transferability of the learned prompts: 1) Feature Consistency Alignment, which imposes constraints on the prompted feature changes to maintain task-agnostic knowledge; and 2) Task Semantics Enrichment, which encourages the prompted images to contain richer task-specific semantics with language guidance. We validate the effectiveness of TVP through extensive experiments with 6 modern MLLMs on a wide variety of tasks ranging from object recognition and counting to multimodal reasoning and hallucination correction.
labels (true): cs.AI, cs.LG, cs.CV; all other category flags: false
__index_level_0__: 447,421
2405.13048
Human-Generative AI Collaborative Problem Solving: Who Leads and How Students Perceive the Interactions
This research investigates distinct human-generative AI collaboration types and students' interaction experiences when collaborating with generative AI (i.e., ChatGPT) for problem-solving tasks and how these factors relate to students' sense of agency and perceived collaborative problem solving. By analyzing the surveys and reflections of 79 undergraduate students, we identified three human-generative AI collaboration types: even contribution, human leads, and AI leads. Notably, our study shows that 77.21% of students perceived that they led or contributed evenly to collaborative problem-solving when collaborating with ChatGPT. On the other hand, 15.19% of the human participants indicated that the collaborations were led by ChatGPT, indicating a potential tendency for students to rely on ChatGPT. Furthermore, 67.09% of students perceived their interaction experiences with ChatGPT to be positive or mixed. We also found a positive correlation between positive interaction experience and a sense of positive agency. The results of this study contribute to our understanding of the collaboration between students and generative AI and highlight the need to study further why some students let ChatGPT lead collaborative problem-solving and how to enhance their interaction experience through curriculum and technology design.
labels (true): cs.HC, cs.AI; all other category flags: false
__index_level_0__: 455,775
2104.09399
TREC Deep Learning Track: Reusable Test Collections in the Large Data Regime
The TREC Deep Learning (DL) Track studies ad hoc search in the large data regime, meaning that a large set of human-labeled training data is available. Results so far indicate that the best models with large data may be deep neural networks. This paper supports the reuse of the TREC DL test collections in three ways. First we describe the data sets in detail, documenting clearly and in one place some details that are otherwise scattered in track guidelines, overview papers and in our associated MS MARCO leaderboard pages. We intend this description to make it easy for newcomers to use the TREC DL data. Second, because there is some risk of iteration and selection bias when reusing a data set, we describe the best practices for writing a paper using TREC DL data, without overfitting. We provide some illustrative analysis. Finally we address a number of issues around the TREC DL data, including an analysis of reusability.
false
false
false
false
true
true
true
false
false
false
false
false
false
false
false
false
false
false
231,226
2406.00375
Teledrive: An Embodied AI based Telepresence System
This article presents Teledrive, a telepresence robotic system with embodied AI features that empowers an operator to navigate the telerobot in any unknown remote place with minimal human intervention. We conceive Teledrive in the context of democratizing remote care-giving for elderly citizens as well as for isolated patients, affected by contagious diseases. In particular, this paper focuses on the problem of navigating to a rough target area (like bedroom or kitchen) rather than pre-specified point destinations. This ushers in a unique AreaGoal based navigation feature, which has not been explored in depth in the contemporary solutions. Further, we describe an edge computing-based software system built on a WebRTC-based communication framework to realize the aforementioned scheme through an easy-to-use speech-based human-robot interaction. Moreover, to enhance the ease of operation for the remote caregiver, we incorporate a person following feature, whereby a robot follows a person on the move in its premises as directed by the operator. Finally, the system presented is loosely coupled with specific robot hardware, unlike the existing solutions. We have evaluated the efficacy of the proposed system through baseline experiments, user study, and real-life deployment.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
459,822
2406.12703
Coarse-Fine Spectral-Aware Deformable Convolution For Hyperspectral Image Reconstruction
We study the inverse problem of Coded Aperture Snapshot Spectral Imaging (CASSI), which captures a spatial-spectral data cube using snapshot 2D measurements and uses algorithms to reconstruct 3D hyperspectral images (HSI). However, current methods based on Convolutional Neural Networks (CNNs) struggle to capture long-range dependencies and non-local similarities. The recently popular Transformer-based methods are poorly deployed on downstream tasks due to the high computational cost caused by self-attention. In this paper, we propose Coarse-Fine Spectral-Aware Deformable Convolution Network (CFSDCN), applying deformable convolutional networks (DCN) to this task for the first time. Considering the sparsity of HSI, we design a deformable convolution module that exploits its deformability to capture long-range dependencies and non-local similarities. In addition, we propose a new spectral information interaction module that considers both coarse-grained and fine-grained spectral similarities. Extensive experiments demonstrate that our CFSDCN significantly outperforms previous state-of-the-art (SOTA) methods on both simulated and real HSI datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
465,536
2407.10954
A Unified Differentiable Boolean Operator with Fuzzy Logic
This paper presents a unified differentiable boolean operator for implicit solid shape modeling using Constructive Solid Geometry (CSG). Traditional CSG relies on min, max operators to perform boolean operations on implicit shapes. But because these boolean operators are discontinuous and discrete in the choice of operations, optimization over the CSG representation is challenging. Drawing inspiration from fuzzy logic, we present a unified boolean operator that outputs a continuous function and is differentiable with respect to operator types. This enables optimization of both the primitives and the boolean operations employed in CSG with continuous optimization techniques, such as gradient descent. We further demonstrate that such a continuous boolean operator allows modeling of both sharp mechanical objects and smooth organic shapes with the same framework. Our proposed boolean operator opens up new possibilities for future research toward fully continuous CSG optimization.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
473,196
1706.09829
Towards Monocular Vision based Obstacle Avoidance through Deep Reinforcement Learning
Obstacle avoidance is a fundamental requirement for autonomous robots which operate in, and interact with, the real world. When perception is limited to monocular vision, avoiding collision becomes significantly more challenging due to the lack of 3D information. Conventional path planners for obstacle avoidance require tuning a number of parameters and do not have the ability to directly benefit from large datasets and continuous use. In this paper, a dueling architecture based deep double-Q network (D3QN) is proposed for obstacle avoidance, using only monocular RGB vision. Based on the dueling and double-Q mechanisms, D3QN can efficiently learn how to avoid obstacles in a simulator even with very noisy depth information predicted from RGB images. Extensive experiments show that D3QN enables twofold acceleration on learning compared with a normal deep Q network and the models trained solely in virtual environments can be directly transferred to real robots, generalizing well to various new environments with previously unseen dynamic objects.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
76,208
1210.4695
Regulating the information in spikes: a useful bias
The bias/variance tradeoff is fundamental to learning: increasing a model's complexity can improve its fit on training data, but potentially worsens performance on future samples. Remarkably, however, the human brain effortlessly handles a wide range of complex pattern recognition tasks. On the basis of these conflicting observations, it has been argued that useful biases in the form of "generic mechanisms for representation" must be hardwired into cortex (Geman et al). This note describes a useful bias that encourages cooperative learning which is both biologically plausible and rigorously justified.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
19,156
1409.5340
Belief revision by examples
A common assumption in belief revision is that the reliability of the information sources is either given, derived from temporal information, or the same for all. This article does not describe a new semantics for integration but addresses the problem of obtaining the reliability of the sources given the result of a previous merging. As an example, the relative reliability of two sensors can be assessed given a certain observation, and this allows for subsequent mergings of data coming from them.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
36,160
2309.03787
USA: Universal Sentiment Analysis Model & Construction of Japanese Sentiment Text Classification and Part of Speech Dataset
Sentiment analysis is a pivotal task in the domain of natural language processing. It encompasses both text-level sentiment polarity classification and word-level Part of Speech (POS) sentiment polarity determination. Such analysis challenges models to understand text holistically while also extracting nuanced information. With the rise of Large Language Models (LLMs), new avenues for sentiment analysis have opened. This paper proposes enhancing performance by leveraging the Mutual Reinforcement Effect (MRE) between individual words and the overall text. It delves into how word polarity influences the overarching sentiment of a passage. To support our research, we annotated four novel Sentiment Text Classification and Part of Speech (SCPOS) datasets, building upon existing sentiment classification datasets. Furthermore, we developed a Universal Sentiment Analysis (USA) model, with a 7-billion parameter size. Experimental results revealed that our model surpassed the performance of gpt-3.5-turbo across all four datasets, underscoring the significance of MRE in sentiment analysis.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
390,508
1902.07830
Deep Multi-modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges
Recent advancements in perception for autonomous driving are driven by deep learning. In order to achieve robust and accurate scene understanding, autonomous vehicles are usually equipped with different sensors (e.g. cameras, LiDARs, Radars), and multiple sensing modalities can be fused to exploit their complementary properties. In this context, many methods have been proposed for deep multi-modal perception problems. However, there is no general guideline for network architecture design, and questions of "what to fuse", "when to fuse", and "how to fuse" remain open. This review paper attempts to systematically summarize methodologies and discuss challenges for deep multi-modal object detection and semantic segmentation in autonomous driving. To this end, we first provide an overview of on-board sensors on test vehicles, open datasets, and background information for object detection and semantic segmentation in autonomous driving research. We then summarize the fusion methodologies and discuss challenges and open questions. In the appendix, we provide tables that summarize topics and methods. We also provide an interactive online platform to navigate each reference: https://boschresearch.github.io/multimodalperception/.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
122,066
1608.07017
Ambient Sound Provides Supervision for Visual Learning
The sound of crashing waves, the roar of fast-moving cars -- sound conveys important information about the objects in our surroundings. In this work, we show that ambient sounds can be used as a supervisory signal for learning visual models. To demonstrate this, we train a convolutional neural network to predict a statistical summary of the sound associated with a video frame. We show that, through this process, the network learns a representation that conveys information about objects and scenes. We evaluate this representation on several recognition tasks, finding that its performance is comparable to that of other state-of-the-art unsupervised learning methods. Finally, we show through visualizations that the network learns units that are selective to objects that are often associated with characteristic sounds.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
60,186
2208.07442
Viability of Robot-supported Flipped Classes in English for Medical Use Reading Comprehension
This study delved into the viability of robot-supported flipped classes in English for Medical Purposes reading comprehension. In a 16-session course, the reading comprehension and subsequent workspace performance of 444 students in Commercially-Off-The-Shelf and Self-Generated robot-supported flipped classes were compared. The results indicated that the flipped classes brought about a good instructional-learning ambience in postsecondary education for English for Medical Purposes (EMP) reading comprehension and encouraged a proactive approach to workspace performance. In tandem, the Mixed Effect Model revealed that student participation in the Self-Generated robot-supported flipped classes yielded a larger effect size (+17.6%) than the Commercially-Off-The-Shelf robot-supported flipped classes. Analyses produced five contributing moderators of EMP reading comprehension and workspace performance: reading proficiency, attitude, manner of practicing, as well as student and teacher role.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
313,045
2205.08480
Effort Informed Roadmaps (EIRM*): Efficient Asymptotically Optimal Multiquery Planning by Actively Reusing Validation Effort
Multiquery planning algorithms find paths between various different starts and goals in a single search space. They are designed to do so efficiently by reusing information across planning queries. This information may be computed before or during the search and often includes knowledge of valid paths. Using known valid paths to solve an individual planning query takes less computational effort than finding a completely new solution. This allows multiquery algorithms, such as PRM*, to outperform single-query algorithms, such as RRT*, on many problems but their relative performance depends on how much information is reused. Despite this, few multiquery planners explicitly seek to maximize path reuse and, as a result, many do not consistently outperform single-query alternatives. This paper presents Effort Informed Roadmaps (EIRM*), an almost-surely asymptotically optimal multiquery planning algorithm that explicitly prioritizes reusing computational effort. EIRM* uses an asymmetric bidirectional search to identify existing paths that may help solve an individual planning query and then uses this information to order its search and reduce computational effort. This allows it to find initial solutions up to an order-of-magnitude faster than state-of-the-art planning algorithms on the tested abstract and robotic multiquery planning problems.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
296,956
2006.05623
Training with Multi-Layer Embeddings for Model Reduction
Modern recommendation systems rely on real-valued embeddings of categorical features. Increasing the dimension of embedding vectors improves model accuracy but comes at a high cost to model size. We introduce a multi-layer embedding training (MLET) architecture that trains embeddings via a sequence of linear layers to derive superior embedding accuracy vs. model size trade-off. Our approach is fundamentally based on the ability of factorized linear layers to produce superior embeddings to that of a single linear layer. We focus on the analysis and implementation of a two-layer scheme. Harnessing the recent results in dynamics of backpropagation in linear neural networks, we explain the ability to get superior multi-layer embeddings via their tendency to have lower effective rank. We show that substantial advantages are obtained in the regime where the width of the hidden layer is much larger than that of the final embedding (d). Crucially, at conclusion of training, we convert the two-layer solution into a single-layer one: as a result, the inference-time model size scales as d. We prototype the MLET scheme within Facebook's PyTorch-based open-source Deep Learning Recommendation Model. We show that it allows reducing d by 4-8X, with a corresponding improvement in memory footprint, at given model accuracy. The experiments are run on two publicly available click-through-rate prediction benchmarks (Criteo-Kaggle and Avazu). The runtime cost of MLET is 25%, on average.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
181,143
2005.11348
Microphone Array Based Surveillance Audio Classification
The work assessed seven classical classifiers and two beamforming algorithms for detecting surveillance sound events. The tests included the use of AWGN with -10 dB to 30 dB SNR. Data Augmentation was also employed to improve algorithms' performance. The results showed that the combination of SVM and Delay-and-Sum (DaS) scored the best accuracy (up to 86.0\%), but had high computational cost ($\approx $ 402 ms), mainly due to DaS. The use of SGD also seems to be a good alternative since it has achieved good accuracy either (up to 85.3\%), but with quicker processing time ($\approx$ 165 ms).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
178,446
1910.11691
Improving Diarization Robustness using Diversification, Randomization and the DOVER Algorithm
Speaker diarization based on bottom-up clustering of speech segments by acoustic similarity is often highly sensitive to the choice of hyperparameters, such as the initial number of clusters and feature weighting. Optimizing these hyperparameters is difficult and often not robust across different data sets. We recently proposed the DOVER algorithm for combining multiple diarization hypotheses by voting. Here we propose to mitigate the robustness problem in diarization by using DOVER to average across different parameter choices. We also investigate the combination of diverse outputs obtained by following different merge choices pseudo-randomly in the course of clustering, thereby mitigating the greediness of best-first clustering. We show on two conference meeting data sets drawn from NIST evaluations that the proposed methods indeed yield more robust, and in several cases overall improved, results.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
150,862
2103.08308
Machine Learning for Massive Industrial Internet of Things
Industrial Internet of Things (IIoT) revolutionizes the future manufacturing facilities by integrating the Internet of Things technologies into industrial settings. With the deployment of massive IIoT devices, it is difficult for the wireless network to support the ubiquitous connections with diverse quality-of-service (QoS) requirements. Although machine learning is regarded as a powerful data-driven tool to optimize wireless networks, how to apply machine learning to deal with the massive IIoT problems with unique characteristics remains unsolved. In this paper, we first summarize the QoS requirements of the typical massive non-critical and critical IIoT use cases. We then identify unique characteristics in the massive IIoT scenario, and the corresponding machine learning solutions with their limitations and potential research directions. We further present the existing machine learning solutions for individual-layer and cross-layer problems in massive IIoT. Last but not least, we present a case study of the massive access problem based on deep neural network and deep reinforcement learning techniques, respectively, to validate the effectiveness of machine learning in the massive IIoT scenario.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
224,866
2103.08735
Joint Satellite Gateway Deployment & Controller Placement in Software-Defined 5G-Satellite Integrated Networks
Several challenging optimization problems arise while considering the deployment of the space-air-ground integrated networks (SAGINs), among which the optimal satellite gateway deployment problem is of significant importance. Moreover, with the increasing interest in the software-defined integration of 5G networks and satellites, the existence of an effective scheme for optimal placement of SDN controllers is essential. In this paper, we discuss the interrelation between the two problems above and propose suitable methods to solve them under various network design criteria. We first provide a MILP model for solving the joint problem, and then motivate the decomposition of the model into two disjoint MILPs. We then show that the resulting problems can be modeled as the optimization of submodular set functions and can be solved efficiently with provable optimality gaps.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
224,970
2003.08429
STEm-Seg: Spatio-temporal Embeddings for Instance Segmentation in Videos
Existing methods for instance segmentation in videos typically involve multi-stage pipelines that follow the tracking-by-detection paradigm and model a video clip as a sequence of images. Multiple networks are used to detect objects in individual frames, and then associate these detections over time. Hence, these methods are often non-end-to-end trainable and highly tailored to specific tasks. In this paper, we propose a different approach that is well-suited to a variety of tasks involving instance segmentation in videos. In particular, we model a video clip as a single 3D spatio-temporal volume, and propose a novel approach that segments and tracks instances across space and time in a single stage. Our problem formulation is centered around the idea of spatio-temporal embeddings which are trained to cluster pixels belonging to a specific object instance over an entire video clip. To this end, we introduce (i) novel mixing functions that enhance the feature representation of spatio-temporal embeddings, and (ii) a single-stage, proposal-free network that can reason about temporal context. Our network is trained end-to-end to learn spatio-temporal embeddings as well as parameters required to cluster these embeddings, thus simplifying inference. Our method achieves state-of-the-art results across multiple datasets and tasks. Code and models are available at https://github.com/sabarim/STEm-Seg.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
168,733
2502.08417
Handwritten Text Recognition: A Survey
Handwritten Text Recognition (HTR) has become an essential field within pattern recognition and machine learning, with applications spanning historical document preservation to modern data entry and accessibility solutions. The complexity of HTR lies in the high variability of handwriting, which makes it challenging to develop robust recognition systems. This survey examines the evolution of HTR models, tracing their progression from early heuristic-based approaches to contemporary state-of-the-art neural models, which leverage deep learning techniques. The scope of the field has also expanded, with models initially capable of recognizing only word-level content progressing to recent end-to-end document-level approaches. Our paper categorizes existing work into two primary levels of recognition: (1) \emph{up to line-level}, encompassing word and line recognition, and (2) \emph{beyond line-level}, addressing paragraph- and document-level challenges. We provide a unified framework that examines research methodologies, recent advances in benchmarking, key datasets in the field, and a discussion of the results reported in the literature. Finally, we identify pressing research challenges and outline promising future directions, aiming to equip researchers and practitioners with a roadmap for advancing the field.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
533,008
1810.12692
Research Issues in Mining User Behavioral Rules for Context-Aware Intelligent Mobile Applications
Context-awareness in smart mobile applications is a growing area of study because of its intelligence in the applications. In order to build context-aware intelligent applications, mining contextual behavioral rules of individual smartphone users utilizing their phone log data is the key. However, to mine these rules, a number of issues, such as the quality of smartphone data, understanding the relevancy of contexts, discretization of continuous contextual data, discovery of useful behavioral rules of individuals and their ordering, knowledge-based interactive post-mining for semantic understanding, and dynamic updating and management of rules according to their present behavior, are investigated. In this paper, we briefly discuss these issues and their potential solution directions for mining individuals' behavioral rules, for the purpose of building various context-aware intelligent mobile applications. We also summarize a number of real-life rule-based applications that intelligently assist individual smartphone users according to their behavioral rules in their daily activities.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
111,825
1607.00662
Unsupervised Learning of 3D Structure from Images
A key goal of computer vision is to recover the underlying 3D structure from 2D observations of the world. In this paper we learn strong deep generative models of 3D structures, and recover these structures from 3D and 2D images via probabilistic inference. We demonstrate high-quality samples and report log-likelihoods on several datasets, including ShapeNet [2], and establish the first benchmarks in the literature. We also show how these models and their inference networks can be trained end-to-end from 2D images. This demonstrates for the first time the feasibility of learning to infer 3D representations of the world in a purely unsupervised manner.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
58,123
1910.02223
A Machine Learning Analysis of the Features in Deceptive and Credible News
Fake news is a type of pervasive propaganda that spreads misinformation online, taking advantage of social media's extensive reach to manipulate public perception. Over the past three years, fake news has become a focal discussion point in the media due to its impact on the 2016 U.S. presidential election. Fake news can have severe real-world implications: in 2016, a man walked into a pizzeria carrying a rifle because he read that Hillary Clinton was harboring children as sex slaves. This project presents a high accuracy (87%) machine learning classifier that determines the validity of news based on the word distributions and specific linguistic and stylistic differences in the first few sentences of an article. This can help readers identify the validity of an article by looking for specific features in the opening lines aiding them in making informed decisions. Using a dataset of 2,107 articles from 30 different websites, this project establishes an understanding of the variations between fake and credible news by examining the model, dataset, and features. This classifier appears to use the differences in word distribution, levels of tone authenticity, and frequency of adverbs, adjectives, and nouns. The differentiation in the features of these articles can be used to improve future classifiers. This classifier can also be further applied directly to browsers as a Google Chrome extension or as a filter for social media outlets or news websites to reduce the spread of misinformation.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
148,178
2102.01593
FEDZIP: A Compression Framework for Communication-Efficient Federated Learning
Federated Learning marks a turning point in the implementation of decentralized machine learning (especially deep learning) for wireless devices by protecting users' privacy and safeguarding raw data from third-party access. It assigns the learning process independently to each client. First, clients locally train a machine learning model based on local data. Next, clients transfer local updates of model weights and biases (training data) to a server. Then, the server aggregates updates (received from clients) to create a global learning model. However, the continuous transfer between clients and the server increases communication costs and is inefficient from a resource utilization perspective due to the large number of parameters (weights and biases) used by deep learning models. The cost of communication becomes a greater concern when the number of contributing clients and communication rounds increases. In this work, we propose a novel framework, FedZip, that significantly decreases the size of updates while transferring weights from the deep learning model between clients and their servers. FedZip implements Top-z sparsification, uses quantization with clustering, and implements compression with three different encoding methods. FedZip outperforms state-of-the-art compression frameworks and reaches compression rates up to 1085x, and preserves up to 99% of bandwidth and 99% of energy for clients during communication.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
218,168
1905.03416
Prioritized Inverse Kinematics: Nonsmoothness, Trajectory Existence, Task Convergence, Stability
In this paper, we study various theoretical properties of a class of prioritized inverse kinematics (PIK) solutions that can be considered as a class of (output regulation or tracking) control laws of a dynamical system with prioritized multiple outputs. We first develop tools to investigate nonsmoothness of PIK solutions and find a sufficient condition for nonsmoothness. It implies that existence and uniqueness of a joint trajectory satisfying a PIK solution cannot be guaranteed by the classical theorems. So, we construct an alternative existence and uniqueness theorem that uses structural information of PIK solutions. Then, we narrow the class of PIK solutions down to the case that all tasks are designed to follow some desired task trajectories and discover a few properties related to task convergence. The study goes further to analyze stability of equilibrium points of the differential equation whose right hand side is a PIK solution when all tasks are designed to reach some desired task positions. Finally, we furnish an example with a two-link manipulator that shows how our findings can be used to analyze the behavior of a joint trajectory generated from a PIK solution.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
130,193
1601.04373
Rate Maximization of Decode-and-Forward Relaying Systems with RF Energy Harvesting
We consider a three-node decode-and-forward (DF) half-duplex relaying system, where the source first harvests RF energy from the relay, and then uses this energy to transmit information to the destination via the relay. We assume that the information transfer and wireless power transfer phases alternate over time in the same frequency band, and their {\it time fraction} (TF) may change or be fixed from one transmission epoch (fading state) to the next. For this system, we maximize the achievable average data rate. Thereby, we propose two schemes: (1) jointly optimal power and TF allocation, and (2) optimal power allocation with fixed TF. Due to the small amounts of harvested power at the source, the two schemes achieve similar information rates, but yield significant performance gains compared to a benchmark system with fixed power and fixed TF allocation.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
51,012
2305.18304
Semantic-aware Digital Twin for Metaverse: A Comprehensive Review
To facilitate the deployment of digital twins in Metaverse, the paradigm with semantic awareness has been proposed as a means for enabling accurate and task-oriented information extraction with inherent intelligence. However, this framework requires all devices in the Metaverse environment to be directly linked with the semantic model to enable faithful interpretation of messages. In contrast, this article introduces the digital twin framework, considering a smart industrial application, which enables semantic communication in conjunction with the Metaverse enabling technologies. The fundamentals of this framework are demonstrated on an industrial shopfloor management use case with a digital twin so as to improve its performance through semantic communication. An overview of semantic communication, Metaverse, and digital twins is presented. Integration of these technologies with the basic architecture as well as the impact on future industrial applications is presented. In a nutshell, this article showcases how semantic awareness can be an effective candidate in the implementation of digital twins for Metaverse applications.
false
false
false
false
false
true
false
false
true
false
false
false
false
true
false
false
false
true
368,946
2005.11064
Human-Like Decision Making for Autonomous Driving: A Noncooperative Game Theoretic Approach
Considering that human-driven vehicles and autonomous vehicles (AVs) will coexist on roads in the future for a long time, how to merge AVs into human drivers' traffic ecology and minimize the effect of AVs and their misfit with human drivers are issues worthy of consideration. Moreover, different passengers have different needs for AVs; thus, how to provide personalized choices for different passengers is another issue for AVs. Therefore, a human-like decision making framework is designed for AVs in this paper. Different driving styles and social interaction characteristics are formulated for AVs regarding driving safety, ride comfort and travel efficiency, which are considered in the modeling process of decision making. Then, Nash equilibrium and Stackelberg game theory are applied to the noncooperative decision making. In addition, potential field method and model predictive control (MPC) are combined to deal with the motion prediction and planning for AVs, which provides predicted motion information for the decision-making module. Finally, two typical testing scenarios of lane change, i.e., merging and overtaking, are carried out to evaluate the feasibility and effectiveness of the proposed decision-making framework considering different human-like behaviors. Testing results indicate that both game theoretic approaches can provide reasonable human-like decision making for AVs. Compared with the Nash equilibrium approach, under the normal driving style, the cost value of decision making using the Stackelberg game theoretic approach is reduced by over 20%.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
178,371
2006.03762
Deep Octree-based CNNs with Output-Guided Skip Connections for 3D Shape and Scene Completion
Acquiring complete and clean 3D shape and scene data is challenging due to geometric occlusion and insufficient views during 3D capturing. We present a simple yet effective deep learning approach for completing the input noisy and incomplete shapes or scenes. Our network is built upon the octree-based CNNs (O-CNN) with U-Net like structures, which enjoys high computational and memory efficiency and supports constructing a very deep network structure for 3D CNNs. A novel output-guided skip-connection is introduced to the network structure for better preserving the input geometry and learning geometry prior from data effectively. We show that with these simple adaptions -- output-guided skip-connection and deeper O-CNN (up to 70 layers), our network achieves state-of-the-art results in 3D shape completion and semantic scene completion.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
180,427
2502.06559
Can We Trust AI Benchmarks? An Interdisciplinary Review of Current Issues in AI Evaluation
Quantitative Artificial Intelligence (AI) Benchmarks have emerged as fundamental tools for evaluating the performance, capability, and safety of AI models and systems. Currently, they shape the direction of AI development and are playing an increasingly prominent role in regulatory frameworks. As their influence grows, however, so too do concerns about how and with what effects they evaluate highly sensitive topics such as capabilities, including high-impact capabilities, safety and systemic risks. This paper presents an interdisciplinary meta-review of about 100 studies that discuss shortcomings in quantitative benchmarking practices, published in the last 10 years. It brings together many fine-grained issues in the design and application of benchmarks (such as biases in dataset creation, inadequate documentation, data contamination, and failures to distinguish signal from noise) with broader sociotechnical issues (such as an over-focus on evaluating text-based AI models according to one-time testing logic that fails to account for how AI models are increasingly multimodal and interact with humans and other technical systems). Our review also highlights a series of systemic flaws in current benchmarking practices, such as misaligned incentives, construct validity issues, unknown unknowns, and problems with the gaming of benchmark results. Furthermore, it underscores how benchmark practices are fundamentally shaped by cultural, commercial and competitive dynamics that often prioritise state-of-the-art performance at the expense of broader societal concerns. By providing an overview of risks associated with existing benchmarking procedures, we problematise disproportionate trust placed in benchmarks and contribute to ongoing efforts to improve the accountability and relevance of quantitative AI benchmarks within the complexities of real-world scenarios.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
532,125
2009.08614
Reinforcement Learning for Weakly Supervised Temporal Grounding of Natural Language in Untrimmed Videos
Temporal grounding of natural language in untrimmed videos is a fundamental yet challenging multimedia task facilitating cross-media visual content retrieval. We focus on the weakly supervised setting of this task that merely has access to coarse video-level language description annotation without temporal boundary, which is more consistent with reality as such weak labels are more readily available in practice. In this paper, we propose a \emph{Boundary Adaptive Refinement} (BAR) framework that resorts to reinforcement learning (RL) to guide the process of progressively refining the temporal boundary. To the best of our knowledge, we offer the first attempt to extend RL to temporal localization task with weak supervision. As it is non-trivial to obtain a straightforward reward function in the absence of pairwise granular boundary-query annotations, a cross-modal alignment evaluator is crafted to measure the alignment degree of segment-query pair to provide tailor-designed rewards. This refinement scheme completely abandons traditional sliding window based solution pattern and contributes to acquiring more efficient, boundary-flexible and content-aware grounding results. Extensive experiments on two public benchmarks Charades-STA and ActivityNet demonstrate that BAR outperforms the state-of-the-art weakly-supervised method and even beats some competitive fully-supervised ones.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
196,292
2208.14326
GaitFi: Robust Device-Free Human Identification via WiFi and Vision Multimodal Learning
As an important biomarker for human identification, human gait can be collected at a distance by passive sensors without subject cooperation, which plays an essential role in crime prevention, security detection and other human identification applications. At present, most research works are based on cameras and computer vision techniques to perform gait recognition. However, vision-based methods are not reliable when confronting poor illuminations, leading to degrading performances. In this paper, we propose a novel multimodal gait recognition method, namely GaitFi, which leverages WiFi signals and videos for human identification. In GaitFi, Channel State Information (CSI) that reflects the multi-path propagation of WiFi is collected to capture human gaits, while videos are captured by cameras. To learn robust gait information, we propose a Lightweight Residual Convolution Network (LRCN) as the backbone network, and further propose the two-stream GaitFi by integrating WiFi and vision features for the gait retrieval task. The GaitFi is trained by the triplet loss and classification loss on different levels of features. Extensive experiments are conducted in the real world, which demonstrates that the GaitFi outperforms state-of-the-art gait recognition methods based on single WiFi or camera, achieving 94.2% for human identification tasks of 12 subjects.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
315,288
2006.12714
On Compression Principle and Bayesian Optimization for Neural Networks
Finding methods for making generalizable predictions is a fundamental problem of machine learning. By looking into similarities between the prediction problem for unknown data and lossless compression, we have found an approach that gives a solution. In this paper we propose a compression principle that states that an optimal predictive model is the one that minimizes a total compressed message length of all data and model definition while guaranteeing decodability. Following the compression principle we use a Bayesian approach to build probabilistic models of data and network definitions. A method to approximate Bayesian integrals using a sequence of variational approximations is implemented as an optimizer for hyper-parameters: Bayesian Stochastic Gradient Descent (BSGD). Training with BSGD is completely defined by setting only three parameters: number of epochs, the size of the dataset and the size of the minibatch, which define a learning rate and a number of iterations. We show that dropout can be used for a continuous dimensionality reduction that allows finding optimal network dimensions as required by the compression principle.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
183,683
2105.09016
E(n) Equivariant Normalizing Flows
This paper introduces a generative model equivariant to Euclidean symmetries: E(n) Equivariant Normalizing Flows (E-NFs). To construct E-NFs, we take the discriminative E(n) graph neural networks and integrate them as a differential equation to obtain an invertible equivariant function: a continuous-time normalizing flow. We demonstrate that E-NFs considerably outperform baselines and existing methods from the literature on particle systems such as DW4 and LJ13, and on molecules from QM9 in terms of log-likelihood. To the best of our knowledge, this is the first flow that jointly generates molecule features and positions in 3D.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
235,943
2010.11491
Overview of Networked Supervisory Control with Imperfect Communication Channels
This paper presents an overview of the networked supervisory control framework for discrete event systems with imperfect communication networks, which can be divided into the centralized supervisory control setup and the decentralized supervisory control setup. We review the state-of-the-art networked control frameworks with observation channel delays and control channel delays, for untimed and timed models. Data losses in communication channels are also considered. The review of the state-of-the-art networked control frameworks will be focused on the following parts: 1) the construction of the networked control closed-loop system; 2) the condition to ensure the existence of a networked supervisor; 3) the synthesis procedure for networked-delay resilient supervisors; and 4) the possibility of improving the synthesis efficiency.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
202,278
2412.10982
MedG-KRP: Medical Graph Knowledge Representation Probing
Large language models (LLMs) have recently emerged as powerful tools, finding many medical applications. LLMs' ability to coalesce vast amounts of information from many sources to generate a response-a process similar to that of a human expert-has led many to see potential in deploying LLMs for clinical use. However, medicine is a setting where accurate reasoning is paramount. Many researchers are questioning the effectiveness of multiple choice question answering (MCQA) benchmarks, frequently used to test LLMs. Researchers and clinicians alike must have complete confidence in LLMs' abilities for them to be deployed in a medical setting. To address this need for understanding, we introduce a knowledge graph (KG)-based method to evaluate the biomedical reasoning abilities of LLMs. Essentially, we map how LLMs link medical concepts in order to better understand how they reason. We test GPT-4, Llama3-70b, and PalmyraMed-70b, a specialized medical model. We enlist a panel of medical students to review a total of 60 LLM-generated graphs and compare these graphs to BIOS, a large biomedical KG. We observe GPT-4 to perform best in our human review but worst in our ground truth comparison; vice-versa with PalmyraMed, the medical model. Our work provides a means of visualizing the medical reasoning pathways of LLMs so they can be implemented in clinical settings safely and effectively.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
517,200
1907.12047
A difficulty ranking approach to personalization in E-learning
The prevalence of e-learning systems and on-line courses has made educational material widely accessible to students of varying abilities and backgrounds. There is thus a growing need to accommodate for individual differences in e-learning systems. This paper presents an algorithm called EduRank for personalizing educational content to students that combines a collaborative filtering algorithm with voting methods. EduRank constructs a difficulty ranking for each student by aggregating the rankings of similar students using different aspects of their performance on common questions. These aspects include grades, number of retries, and time spent solving questions. It infers a difficulty ranking directly over the questions for each student, rather than ordering them according to the student's predicted score. The EduRank algorithm was tested on two data sets containing thousands of students and a million records. It was able to outperform the state-of-the-art ranking approaches as well as a domain expert. EduRank was used by students in a classroom activity, where a prior model was incorporated to predict the difficulty rankings of students with no prior history in the system. It was shown to lead students to solve more difficult questions than an ordering by a domain expert, without reducing their performance.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
140,021
2110.00645
How To Not Drive: Learning Driving Constraints from Demonstration
We propose a new scheme to learn motion planning constraints from human driving trajectories. Behavioral and motion planning are the key components in an autonomous driving system. The behavioral planning is responsible for high-level decision making required to follow traffic rules and interact with other road participants. The motion planner's role is to generate feasible, safe trajectories for a self-driving vehicle to follow. The trajectories are generated through an optimization scheme to optimize a cost function based on metrics related to smoothness, movability, and comfort, and subject to a set of constraints derived from the planned behavior, safety considerations, and feasibility. A common practice is to manually design the cost function and constraints. Recent work has investigated learning the cost function from human driving demonstrations. While effective, the practical application of such approaches is still questionable in autonomous driving. In contrast, this paper focuses on learning driving constraints, which can be used as an add-on module to existing autonomous driving solutions. To learn the constraint, the planning problem is formulated as a constrained Markov Decision Process, whose elements are assumed to be known except the constraints. The constraints are then learned by learning the distribution of expert trajectories and estimating the probability of optimal trajectories belonging to the learned distribution. The proposed scheme is evaluated using the NGSIM dataset, yielding less than 1\% collision rate and out-of-road maneuvers when the learned constraints are used in an optimization-based motion planner.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
258,470
1403.3298
The role of network embeddedness on the selection of collaboration partners: An agent-based model with empirical validation
We use a data-driven agent-based model to study the core-periphery structure of two collaboration networks, R&D alliances between firms and co-authorship relations between scientists. To characterize the network embeddedness of agents, we introduce a coreness value, obtained from a weighted $k$-core decomposition. We study the change of these coreness values when collaborations with newcomers or established agents are formed. Our agent-based model is able to reproduce the empirical coreness differences of collaboration partners and to explain why we observe a change in partner selection for agents with high network embeddedness.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
31,556
1307.7770
A Connection between Good Rate-distortion Codes and Backward DMCs
Let $X^n\in\mathcal{X}^n$ be a sequence drawn from a discrete memoryless source, and let $Y^n\in\mathcal{Y}^n$ be the corresponding reconstruction sequence that is output by a good rate-distortion code. This paper establishes a property of the joint distribution of $(X^n,Y^n)$. It is shown that for $D>0$, the input-output statistics of a $R(D)$-achieving rate-distortion code converge (in normalized relative entropy) to the output-input statistics of a discrete memoryless channel (dmc). The dmc is "backward" in that it is a channel from the reconstruction space $\mathcal{Y}^n$ to source space $\mathcal{X}^n$. It is also shown that the property does not necessarily hold when normalized relative entropy is replaced by variational distance.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
26,128
2002.00583
CoTK: An Open-Source Toolkit for Fast Development and Fair Evaluation of Text Generation
In text generation evaluation, many practical issues, such as inconsistent experimental settings and metric implementations, are often ignored but lead to unfair evaluation and untenable conclusions. We present CoTK, an open-source toolkit aiming to support fast development and fair evaluation of text generation. In model development, CoTK helps handle the cumbersome issues, such as data processing, metric implementation, and reproduction. It standardizes the development steps and reduces human errors which may lead to inconsistent experimental settings. In model evaluation, CoTK provides implementation for many commonly used metrics and benchmark models across different experimental settings. As a unique feature, CoTK can signify when and which metric cannot be fairly compared. We demonstrate that it is convenient to use CoTK for model development and evaluation, particularly across different experimental settings.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
162,403
2405.11658
A Starting Point for Dynamic Community Detection with Leiden Algorithm
Real-world graphs often evolve over time, making community or cluster detection a crucial task. In this technical report, we extend three dynamic approaches - Naive-dynamic (ND), Delta-screening (DS), and Dynamic Frontier (DF) - to our multicore implementation of the Leiden algorithm, known for its high-quality community detection. Our experiments, conducted on a server with a 64-core AMD EPYC-7742 processor, show that ND, DS, and DF Leiden achieve average speedups of 1.37x, 1.47x, and 1.98x on large graphs with random batch updates, compared to the Static Leiden algorithm - while scaling at a rate of 1.6x for every doubling of threads. To our knowledge, this is the first attempt to apply dynamic approaches to the Leiden algorithm. We hope these early results pave the way for further development of dynamic approaches for evolving graphs.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
455,229
2112.13681
Wholesale Electricity Price Forecasting using Integrated Long-term Recurrent Convolutional Network Model
Electricity price is a key factor affecting the decision-making for all market participants. Accurate forecasting of electricity prices is very important and is also very challenging since electricity price is highly volatile due to various factors. This paper proposes an integrated long-term recurrent convolutional network (ILRCN) model to predict electricity prices considering the majority contributing attributes to the market price as input. The proposed ILRCN model combines the functionalities of convolutional neural network and long short-term memory (LSTM) algorithm along with the proposed novel conditional error correction term. The combined ILRCN model can identify the linear and non-linear behavior within the input data. We have used ERCOT wholesale market price data along with load profile, temperature, and other factors for the Houston region to illustrate the proposed model. The performance of the proposed ILRCN electricity price forecasting model is verified using performance/evaluation metrics like mean absolute error and accuracy. Case studies reveal that the proposed ILRCN model is accurate and efficient in electricity price forecasting as compared to the support vector machine (SVM) model, fully-connected neural network model, LSTM model and the LRCN model without the conditional error correction stage.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
273,326
2410.05759
3D UAV Trajectory Planning for IoT Data Collection via Matrix-Based Evolutionary Computation
UAVs are increasingly becoming vital tools in various wireless communication applications including internet of things (IoT) and sensor networks, thanks to their rapid and agile non-terrestrial mobility. Despite recent research, planning three-dimensional (3D) UAV trajectories over a continuous temporal-spatial domain remains challenging due to the need to solve computationally intensive optimization problems. In this paper, we study UAV-assisted IoT data collection aimed at minimizing total energy consumption while accounting for the UAV's physical capabilities, the heterogeneous data demands of IoT nodes, and 3D terrain. We propose a matrix-based differential evolution with constraint handling (MDE-CH), a computation-efficient evolutionary algorithm designed to address non-convex constrained optimization problems with several different types of constraints. Numerical evaluations demonstrate that the proposed MDE-CH algorithm provides a continuous 3D temporal-spatial UAV trajectory capable of efficiently minimizing energy consumption under various practical constraints and outperforms the conventional fly-hover-fly model for both two-dimensional (2D) and 3D trajectory planning.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
495,898
1912.03787
Getting Topology and Point Cloud Generation to Mesh
In this work, we explore the idea that effective generative models for point clouds under the autoencoding framework must acknowledge the relationship between a continuous surface, a discretized mesh, and a set of points sampled from the surface. This view motivates a generative model that works by progressively deforming a uniform sphere until it approximates the goal point cloud. We review the underlying concepts leading to this conclusion from computer graphics and topology in differential geometry, and model the generation process as deformation via deep neural network parameterization. Finally, we show that this view of the problem produces a model that can generate quality meshes efficiently.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
156,688
2410.01695
From Prohibition to Adoption: How Hong Kong Universities Are Navigating ChatGPT in Academic Workflows
This paper compares the period when Hong Kong universities banned ChatGPT with the current period, in which it has become integrated into academic processes. Prompted by concerns over integrity and ethical issues in technology, institutions have adapted by adopting AI literacy and responsibility policies. This study examines the new paradigms that have been developed to realize these benefits while preventing negative effects on academia. Keywords: ChatGPT, Academic Integrity, AI Literacy, Ethical AI Use, Generative AI in Education, University Policy, AI Integration in Academia, Higher Education and Technology
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
493,896
2103.15912
Data Augmentation in a Hybrid Approach for Aspect-Based Sentiment Analysis
Data augmentation is a way to increase the diversity of available data by applying constrained transformations on the original data. This strategy has been widely used in image classification but has to the best of our knowledge not yet been used in aspect-based sentiment analysis (ABSA). ABSA is a text analysis technique that determines aspects and their associated sentiment in opinionated text. In this paper, we investigate the effect of data augmentation on a state-of-the-art hybrid approach for aspect-based sentiment analysis (HAABSA). We apply modified versions of easy data augmentation (EDA), backtranslation, and word mixup. We evaluate the proposed techniques on the SemEval 2015 and SemEval 2016 datasets. The best result is obtained with the adjusted version of EDA, which yields a 0.5 percentage point improvement on the SemEval 2016 dataset and 1 percentage point increase on the SemEval 2015 dataset compared to the original HAABSA model.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
227,369
2307.10198
Has China caught up to the US in AI research? An exploration of mimetic isomorphism as a model for late industrializers
Artificial Intelligence (AI), a cornerstone of 21st-century technology, has seen remarkable growth in China. In this paper, we examine China's AI development process, demonstrating that it is characterized by rapid learning and differentiation, surpassing the export-oriented growth propelled by Foreign Direct Investment seen in earlier Asian industrializers. Our data indicates that China currently leads the USA in the volume of AI-related research papers. However, when we delve into the quality of these papers based on specific metrics, the USA retains a slight edge. Nevertheless, the pace and scale of China's AI development remain noteworthy. We attribute China's accelerated AI progress to several factors, including global trends favoring open access to algorithms and research papers, contributions from China's broad diaspora and returnees, and relatively lax data protection policies. In the vein of our research, we have developed a novel measure for gauging China's imitation of US research. Our analysis shows that by 2018, the time lag between China and the USA in addressing AI research topics had evaporated. This finding suggests that China has effectively bridged a significant knowledge gap and could potentially be setting out on an independent research trajectory. While this study compares China and the USA exclusively, it's important to note that research collaborations between these two nations have resulted in more highly cited work than those produced by either country independently. This underscores the power of international cooperation in driving scientific progress in AI.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
380,457
2206.10129
Automatic Concept Extraction for Concept Bottleneck-based Video Classification
Recent efforts in interpretable deep learning models have shown that concept-based explanation methods achieve competitive accuracy with standard end-to-end models and enable reasoning and intervention about extracted high-level visual concepts from images, e.g., identifying the wing color and beak length for bird-species classification. However, these concept bottleneck models rely on a necessary and sufficient set of predefined concepts-which is intractable for complex tasks such as video classification. For complex tasks, the labels and the relationship between visual elements span many frames, e.g., identifying a bird flying or catching prey-necessitating concepts with various levels of abstraction. To this end, we present CoDEx, an automatic Concept Discovery and Extraction module that rigorously composes a necessary and sufficient set of concept abstractions for concept-based video classification. CoDEx identifies a rich set of complex concept abstractions from natural language explanations of videos-obviating the need to predefine the amorphous set of concepts. To demonstrate our method's viability, we construct two new public datasets that combine existing complex video classification datasets with short, crowd-sourced natural language explanations for their labels. Our method elicits inherent complex concept abstractions in natural language to generalize concept-bottleneck methods to complex tasks.
false
false
false
false
false
true
true
false
false
false
false
true
false
false
false
false
false
false
303,806
2409.04808
HULLMI: Human vs LLM identification with explainability
As LLMs become increasingly proficient at producing human-like responses, there has been a rise of academic and industrial pursuits dedicated to flagging a given piece of text as "human" or "AI". Most of these pursuits involve modern NLP detectors like T5-Sentinel and RoBERTa-Sentinel, without paying too much attention to issues of interpretability and explainability of these models. In our study, we provide a comprehensive analysis that shows that traditional ML models (Naive-Bayes, MLP, Random Forests, XGBoost) perform as well as modern NLP detectors, in human vs AI text detection. We achieve this by implementing a robust testing procedure on diverse datasets, including curated corpora and real-world samples. Subsequently, by employing the explainable AI technique LIME, we uncover parts of the input that contribute most to the prediction of each model, providing insights into the detection process. Our study contributes to the growing need for developing production-level LLM detection tools, which can leverage a wide range of traditional as well as modern NLP detectors we propose. Finally, the LIME techniques we demonstrate also have the potential to equip these detection tools with interpretability analysis features, making them more reliable and trustworthy in various domains like education, healthcare, and media.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
486,517
1705.06211
An Investigation of Newton-Sketch and Subsampled Newton Methods
Sketching, a dimensionality reduction technique, has received much attention in the statistics community. In this paper, we study sketching in the context of Newton's method for solving finite-sum optimization problems in which the number of variables and data points are both large. We study two forms of sketching that perform dimensionality reduction in data space: Hessian subsampling and randomized Hadamard transformations. Each has its own advantages, and their relative tradeoffs have not been investigated in the optimization literature. Our study focuses on practical versions of the two methods in which the resulting linear systems of equations are solved approximately, at every iteration, using an iterative solver. The advantages of using the conjugate gradient method vs. a stochastic gradient iteration are revealed through a set of numerical experiments, and a complexity analysis of the Hessian subsampling method is presented.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
73,606
1705.09597
Extracting 3D Vascular Structures from Microscopy Images using Convolutional Recurrent Networks
Vasculature is known to be of key biological significance, especially in the study of cancer. As such, considerable effort has been focused on the automated measurement and analysis of vasculature in medical and pre-clinical images. In tumors in particular, the vascular networks may be extremely irregular and the appearance of the individual vessels may not conform to classical descriptions of vascular appearance. Typically, vessels are extracted by either a segmentation and thinning pipeline, or by direct tracking. Neither of these methods are well suited to microscopy images of tumor vasculature. In order to address this we propose a method to directly extract a medial representation of the vessels using Convolutional Neural Networks. We then show that these two-dimensional centerlines can be meaningfully extended into 3D in anisotropic and complex microscopy images using the recently popularized Convolutional Long Short-Term Memory units (ConvLSTM). We demonstrate the effectiveness of this hybrid convolutional-recurrent architecture over both 2D and 3D convolutional comparators.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
74,231
2302.05046
Information-Theoretical Approach to Integrated Pulse-Doppler Radar and Communication Systems
Integrated sensing and communication improves the design of systems by combining sensing and communication functions for increased efficiency, accuracy, and cost savings. The optimal integration requires understanding the trade-off between sensing and communication, but this can be difficult due to the lack of unified performance metrics. In this paper, an information-theoretical approach is used to design the system with a unified metric. A sensing rate is introduced to measure the amount of information obtained by a pulse-Doppler radar system. An approximation and lower bound of the sensing rate is obtained in closed forms. Using both the derived sensing information and communication rates, the optimal bandwidth allocation strategy is found for maximizing the weighted sum of the spectral efficiency for sensing and communication. The simulation results confirm the validity of the approximation and the effectiveness of the proposed bandwidth allocation.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
344,912
1807.10363
Message-passing neural networks for high-throughput polymer screening
Machine learning methods have shown promise in predicting molecular properties, and given sufficient training data machine learning approaches can enable rapid high-throughput virtual screening of large libraries of compounds. Graph-based neural network architectures have emerged in recent years as the most successful approach for predictions based on molecular structure, and have consistently achieved the best performance on benchmark quantum chemical datasets. However, these models have typically required optimized 3D structural information for the molecule to achieve the highest accuracy. These 3D geometries are costly to compute for high levels of theory, limiting the applicability and practicality of machine learning methods in high-throughput screening applications. In this study, we present a new database of candidate molecules for organic photovoltaic applications, comprising approximately 91,000 unique chemical structures. Compared to existing datasets, this dataset contains substantially larger molecules (up to 200 atoms) as well as extrapolated properties for long polymer chains. We show that message-passing neural networks trained with and without 3D structural information for these molecules achieve similar accuracy, comparable to state-of-the-art methods on existing benchmark datasets. These results therefore emphasize that for larger molecules with practical applications, near-optimal prediction results can be obtained without using optimized 3D geometry as an input. We further show that learned molecular representations can be leveraged to reduce the training data required to transfer predictions to a new DFT functional.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
103,920
2104.01305
Nova-LSM: A Distributed, Component-based LSM-tree Key-value Store
The cloud infrastructure motivates disaggregation of monolithic data stores into components that are assembled together based on an application's workload. This study investigates disaggregation of an LSM-tree key-value store into components that communicate using RDMA. These components separate storage from processing, enabling processing components to share storage bandwidth and space. The processing components scatter blocks of a file (SSTable) across an arbitrary number of storage components and balance load across them using power-of-d. They construct ranges dynamically at runtime to parallelize compaction and enhance performance. Each component has configuration knobs that control its scalability. The resulting component-based system, Nova-LSM, is elastic. It outperforms its monolithic counterparts, both LevelDB and RocksDB, by several orders of magnitude with workloads that exhibit a skewed pattern of access to data.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
228,300
2306.09775
Using Machine Learning Methods for Automation of Size Grid Building and Management
Fashion apparel companies require planning for the next season, a year in advance, for supply chain management. This study focuses on size selection decision making for Levi Strauss. Currently, the region and planning group level size grids are built and managed manually. The company suffers from the workload this creates for the sizing, merchant and planning teams. This research aims to answer two research questions: "Which sizes should be available to the planners under each size grid name for the next season(s)?" and "Which sizes should be adopted for each planning group for the next season(s)?". We approach the problem with a classification model, one of the most popular model types in machine learning. With this research, a more automated process was created using machine learning techniques. A decrease in the workload of the company's teams is expected once it is put into practice. Unlike many state-of-the-art studies in the fashion and apparel industry, this study focuses on sizes, where the stock keeping unit represents a product with a certain size.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
373,957
1305.2265
Quality Measures of Parameter Tuning for Aggregated Multi-Objective Temporal Planning
Parameter tuning is recognized today as a crucial ingredient when tackling an optimization problem. Several meta-optimization methods have been proposed to find the best parameter set for a given optimization algorithm and (set of) problem instances. When the objective of the optimization is some scalar quality of the solution given by the target algorithm, this quality is also used as the basis for the quality of parameter sets. But in the case of multi-objective optimization by aggregation, the set of solutions is given by several single-objective runs with different weights on the objectives, and it turns out that the hypervolume of the final population of each single-objective run might be a better indicator of the global performance of the aggregation method than the best fitness in its population. This paper discusses this issue on a case study in multi-objective temporal planning using the evolutionary planner DaE-YAHSP and the meta-optimizer ParamILS. The results clearly show how ParamILS makes a difference between both approaches, and demonstrate that indeed, in this context, using the hypervolume indicator as ParamILS target is the best choice. Other issues pertaining to parameter tuning in the proposed context are also discussed.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
24,502
2108.07927
Fed-TGAN: Federated Learning Framework for Synthesizing Tabular Data
Generative Adversarial Networks (GANs) are typically trained to synthesize data, from images and more recently tabular data, under the assumption of directly accessible training data. Recently, federated learning (FL) is an emerging paradigm that features decentralized learning on client's local data with a privacy-preserving capability. And, while learning GANs to synthesize images on FL systems has just been demonstrated, it is unknown if GANs for tabular data can be learned from decentralized data sources. Moreover, it remains unclear which distributed architecture suits them best. Different from image GANs, state-of-the-art tabular GANs require prior knowledge on the data distribution of each (discrete and continuous) column to agree on a common encoding -- risking privacy guarantees. In this paper, we propose Fed-TGAN, the first Federated learning framework for Tabular GANs. To effectively learn a complex tabular GAN on non-identical participants, Fed-TGAN designs two novel features: (i) a privacy-preserving multi-source feature encoding for model initialization; and (ii) table similarity aware weighting strategies to aggregate local models for countering data skew. We extensively evaluate the proposed Fed-TGAN against variants of decentralized learning architectures on four widely used datasets. Results show that Fed-TGAN accelerates training time per epoch up to 200% compared to the alternative architectures, for both IID and Non-IID data. Overall, Fed-TGAN not only stabilizes the training loss, but also achieves better similarity between generated and original data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
251,063
2103.09279
Quadratic-exponential functionals of Gaussian quantum processes
This paper is concerned with exponential moments of integral-of-quadratic functions of quantum processes with canonical commutation relations of position-momentum type. Such quadratic-exponential functionals (QEFs) arise as robust performance criteria in control problems for open quantum harmonic oscillators (OQHOs) driven by bosonic fields. We develop a randomised representation for the QEF using a Karhunen-Loeve expansion of the quantum process on a bounded time interval over the eigenbasis of its two-point commutator kernel, with noncommuting position-momentum pairs as coefficients. This representation holds regardless of a particular quantum state and employs averaging over an auxiliary classical Gaussian random process whose covariance operator is specified by the commutator kernel. This allows the QEF to be related to the moment-generating functional of the quantum process and computed for multipoint Gaussian states. For stationary Gaussian quantum processes, we establish a frequency-domain formula for the QEF rate in terms of the Fourier transform of the quantum covariance kernel in composition with trigonometric functions. A differential equation is obtained for the QEF rate with respect to the risk sensitivity parameter for its approximation and numerical computation. The QEF is also applied to large deviations and worst-case mean square cost bounds for OQHOs in the presence of statistical uncertainty with a quantum relative entropy description.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
225,121
2103.16051
Reduced Dynamics and Control for an Autonomous Bicycle
In this paper, we propose the reduced model for the full dynamics of a bicycle and analyze its nonlinear behavior under a proportional control law for steering. Based on the Gibbs-Appell equations for the Whipple bicycle, we obtain a second-order nonlinear ordinary differential equation (ODE) that governs the bicycle's controlled motion. Two types of equilibrium points for the governing equation are found, which correspond to the bicycle's uniform straight forward and circular motions, respectively. By applying the Hurwitz criterion to the linearized equation, we find that the steer coefficient must be negative, consistent with the human's intuition of turning toward a fall. Under this condition, a critical angular velocity of the rear wheel exists, above which the uniform straight forward motion is stable, and slightly below which a pair of symmetrical stable uniform circular motions will occur. These theoretical findings are verified by both numerical simulations and experiments performed on a powered autonomous bicycle.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
227,436
2102.02885
Adversarial Robustness Study of Convolutional Neural Network for Lumbar Disk Shape Reconstruction from MR images
Machine learning technologies using deep neural networks (DNNs), especially convolutional neural networks (CNNs), have made automated, accurate, and fast medical image analysis a reality for many applications, and some DNN-based medical image analysis systems have even been FDA-cleared. Despite the progress, challenges remain to build DNNs as reliable as human expert doctors. It is known that DNN classifiers may not be robust to noises: by adding a small amount of noise to an input image, a DNN classifier may make a wrong classification of the noisy image (i.e., in-distribution adversarial sample), whereas it makes the right classification of the clean image. Another issue is caused by out-of-distribution samples that are not similar to any sample in the training set. Given such a sample as input, the output of a DNN will become meaningless. In this study, we investigated the in-distribution (IND) and out-of-distribution (OOD) adversarial robustness of a representative CNN for lumbar disk shape reconstruction from spine MR images. To study the relationship between dataset size and robustness to IND adversarial attacks, we used a data augmentation method to create training sets with different levels of shape variations. We utilized the PGD-based algorithm for IND adversarial attacks and extended it for OOD adversarial attacks to generate OOD adversarial samples for model testing. The results show that IND adversarial training can improve the CNN robustness to IND adversarial attacks, and larger training datasets may lead to higher IND robustness. However, it is still a challenge to defend against OOD adversarial attacks.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
218,547
2410.09489
Towards Efficient Visual-Language Alignment of the Q-Former for Visual Reasoning Tasks
Recent advancements in large language models have demonstrated enhanced capabilities in visual reasoning tasks by employing additional encoders for aligning different modalities. While the Q-Former has been widely used as a general encoder for aligning several modalities including image, video, audio, and 3D with large language models, previous works on its efficient training and the analysis of its individual components have been limited. In this work, we investigate the effectiveness of parameter efficient fine-tuning (PEFT) the Q-Former using InstructBLIP with visual reasoning benchmarks ScienceQA and IconQA. We observe that applying PEFT to the Q-Former achieves comparable performance to full fine-tuning using under 2% of the trainable parameters. Additionally, we employ AdaLoRA for dynamic parameter budget reallocation to examine the relative importance of the Q-Former's sublayers with 4 different benchmarks. Our findings reveal that the self-attention layers are noticeably more important in perceptual visual-language reasoning tasks, and relative importance of FFN layers depends on the complexity of visual-language patterns involved in tasks. The code is available at https://github.com/AttentionX/InstructBLIP_PEFT.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
497,619
1202.3718
On the Complexity of Decision Making in Possibilistic Decision Trees
When the information about uncertainty cannot be quantified in a simple, probabilistic way, the topic of possibilistic decision theory is often a natural one to consider. The development of possibilistic decision theory has led to a series of possibilistic criteria, e.g., pessimistic possibilistic qualitative utility, possibilistic likely dominance, binary possibilistic utility and possibilistic Choquet integrals. This paper focuses on sequential decision making in possibilistic decision trees. It proposes a complexity study of the problem of finding an optimal strategy depending on the monotonicity property of the optimization criteria, which allows the application of dynamic programming that offers a polytime reduction of the decision problem. It also shows that possibilistic Choquet integrals do not satisfy this property, and that in this case the optimization problem is NP-hard.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
14,390
2307.12166
The Imitation Game: Detecting Human and AI-Generated Texts in the Era of ChatGPT and BARD
The potential of artificial intelligence (AI)-based large language models (LLMs) holds considerable promise in revolutionizing education, research, and practice. However, distinguishing between human-written and AI-generated text has become a significant task. This paper presents a comparative study, introducing a novel dataset of human-written and LLM-generated texts in different genres: essays, stories, poetry, and Python code. We employ several machine learning models to classify the texts. Results demonstrate the efficacy of these models in discerning between human and AI-generated text, despite the dataset's limited sample size. However, the task becomes more challenging when classifying GPT-generated text, particularly in story writing. The results indicate that the models exhibit superior performance in binary classification tasks, such as distinguishing human-generated text from a specific LLM, compared to the more complex multiclass tasks that involve discerning among human-generated and multiple LLMs. Our findings provide insightful implications for AI text detection while our dataset paves the way for future research in this evolving area.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
381,164
2407.07229
Using Galaxy Evolution as Source of Physics-Based Ground Truth for Generative Models
Generative models producing images have enormous potential to advance discoveries across scientific fields and require metrics capable of quantifying the high dimensional output. We propose that astrophysics data, such as galaxy images, can test generative models with additional physics-motivated ground truths in addition to human judgment. For example, galaxies in the Universe form and change over billions of years, following physical laws and relationships that are both easy to characterize and difficult to encode in generative models. We build a conditional denoising diffusion probabilistic model (DDPM) and a conditional variational autoencoder (CVAE) and test their ability to generate realistic galaxies conditioned on their redshifts (galaxy ages). This is one of the first studies to probe these generative models using physically motivated metrics. We find that both models produce comparable realistic galaxies based on human evaluation, but our physics-based metrics are better able to discern the strengths and weaknesses of the generative models. Overall, the DDPM model performs better than the CVAE on the majority of the physics-based metrics. Ultimately, if we can show that generative models can learn the physics of galaxy evolution, they have the potential to unlock new astrophysical discoveries.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
471,678
1802.09184
Variance Reduction Methods for Sublinear Reinforcement Learning
There is a technical issue in the analysis that is not easily fixable. We, therefore, withdraw the submission. Sorry for the inconvenience.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
91,288
2006.04680
Dimensionality Reduction for Sentiment Classification: Evolving for the Most Prominent and Separable Features
In sentiment classification, the enormous amount of textual data, its immense dimensionality, and inherent noise make it extremely difficult for machine learning classifiers to extract high-level and complex abstractions. In order to make the data less sparse and more statistically significant, dimensionality reduction techniques are needed. But in the existing dimensionality reduction techniques, the number of components needs to be set manually, which results in loss of the most prominent features, thus reducing the performance of the classifiers. Our prior work, i.e., Term Presence Count (TPC) and Term Presence Ratio (TPR), has proven to be effective as it rejects the less separable features. However, the most prominent and separable features might still get removed from the initial feature set despite having higher distributions among positive and negative tagged documents. To overcome this problem, we have proposed a new framework that consists of two dimensionality reduction techniques, i.e., Sentiment Term Presence Count (SentiTPC) and Sentiment Term Presence Ratio (SentiTPR). These techniques reject features by considering the term presence difference for SentiTPC and the ratio of the distribution distinction for SentiTPR. Additionally, these methods also analyze the total distribution information. Extensive experimental results exhibit that the proposed framework reduces the feature dimension by a large margin and thus significantly improves the classification performance.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
180,775
2103.11972
Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals
There has been a recent resurgence of interest in explainable artificial intelligence (XAI) that aims to reduce the opaqueness of AI-based decision-making systems, allowing humans to scrutinize and trust them. Prior work in this context has focused on the attribution of responsibility for an algorithm's decisions to its inputs wherein responsibility is typically approached as a purely associational concept. In this paper, we propose a principled causality-based approach for explaining black-box decision-making systems that addresses limitations of existing methods in XAI. At the core of our framework lies probabilistic contrastive counterfactuals, a concept that can be traced back to philosophical, cognitive, and social foundations of theories on how humans generate and select explanations. We show how such counterfactuals can quantify the direct and indirect influences of a variable on decisions made by an algorithm, and provide actionable recourse for individuals negatively affected by the algorithm's decision. Unlike prior work, our system, LEWIS: (1) can compute provably effective explanations and recourse at local, global and contextual levels; (2) is designed to work with users with varying levels of background knowledge of the underlying causal model; and (3) makes no assumptions about the internals of an algorithmic system except for the availability of its input-output data. We empirically evaluate LEWIS on three real-world datasets and show that it generates human-understandable explanations that improve upon state-of-the-art approaches in XAI, including the popular LIME and SHAP. Experiments on synthetic data further demonstrate the correctness of LEWIS's explanations and the scalability of its recourse algorithm.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
true
false
226,024
1910.12043
Bayesian Experimental Design for Finding Reliable Level Set under Input Uncertainty
In the manufacturing industry, it is often necessary to repeat expensive operational testing of a machine in order to identify the range of input conditions under which the machine operates properly. Since it is often difficult to accurately control the input conditions during the actual usage of the machine, there is a need to guarantee the performance of the machine after properly incorporating the possible variation in input conditions. In this paper, we formulate this practical manufacturing scenario as an Input Uncertain Reliable Level Set Estimation (IU-rLSE) problem, and provide an efficient algorithm for solving it. The goal of IU-rLSE is to identify the input range in which outputs smaller/greater than a desired threshold can be obtained with high probability when the input uncertainty is properly taken into consideration. We propose an active learning method to solve the IU-rLSE problem efficiently, theoretically analyze its accuracy and convergence, and illustrate its empirical performance through numerical experiments on artificial and real data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
150,953
1910.04814
ErrorNet: Learning error representations from limited data to improve vascular segmentation
Deep convolutional neural networks have proved effective in segmenting lesions and anatomies in various medical imaging modalities. However, in the presence of small sample size and domain shift problems, these models often produce masks with non-intuitive segmentation mistakes. In this paper, we propose a segmentation framework called ErrorNet, which learns to correct these segmentation mistakes through the repeated process of injecting systematic segmentation errors to the segmentation result based on a learned shape prior, followed by attempting to predict the injected error. During inference, ErrorNet corrects the segmentation mistakes by adding the predicted error map to the initial segmentation result. ErrorNet has advantages over alternatives based on domain adaptation or CRF-based post processing, because it requires neither domain-specific parameter tuning nor any data from the target domains. We have evaluated ErrorNet using five public datasets for the task of retinal vessel segmentation. The selected datasets differ in size and patient population, allowing us to evaluate the effectiveness of ErrorNet in handling small sample size and domain shift problems. Our experiments demonstrate that ErrorNet outperforms a base segmentation model, a CRF-based post processing scheme, and a domain adaptation method, with a greater performance gain in the presence of the aforementioned dataset limitations.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
148,866
1607.03827
The KIT Motion-Language Dataset
Linking human motion and natural language is of great interest for the generation of semantic representations of human activities as well as for the generation of robot activities based on natural language input. However, while there have been years of research in this area, no standardized and openly available dataset exists to support the development and evaluation of such systems. We therefore propose the KIT Motion-Language Dataset, which is large, open, and extensible. We aggregate data from multiple motion capture databases and include them in our dataset using a unified representation that is independent of the capture system or marker set, making it easy to work with the data regardless of its origin. To obtain motion annotations in natural language, we apply a crowd-sourcing approach and a web-based tool that was specifically built for this purpose, the Motion Annotation Tool. We thoroughly document the annotation process itself and discuss gamification methods that we used to keep annotators motivated. We further propose a novel method, perplexity-based selection, which systematically selects motions for further annotation that are either under-represented in our dataset or that have erroneous annotations. We show that our method mitigates the two aforementioned problems and ensures a systematic annotation process. We provide an in-depth analysis of the structure and contents of our resulting dataset, which, as of October 10, 2016, contains 3911 motions with a total duration of 11.23 hours and 6278 annotations in natural language that contain 52,903 words. We believe this makes our dataset an excellent choice that enables more transparent and comparable research in this important area.
false
false
false
false
false
false
true
true
true
false
false
true
false
false
false
false
false
false
58,560
1705.05935
Rise of the humanbot
The accelerated path of technological development, particularly at the interface between hardware and biology has been suggested as evidence for future major technological breakthroughs associated to our potential to overcome biological constraints. This includes the potential of becoming immortal, having expanded cognitive capacities thanks to hardware implants or the creation of intelligent machines. Here I argue that several relevant evolutionary and structural constraints might prevent achieving most (if not all) these innovations. Instead, the coming future will bring novelties that will challenge many other aspects of our life and that can be seen as other feasible singularities. One particularly important one has to do with the evolving interactions between humans and non-intelligent robots capable of learning and communication. Here I argue that a long term interaction can lead to a new class of "agent" (the humanbot). The way shared memories get tangled over time will inevitably have important consequences for both sides of the pair, whose identity as separated entities might become blurred and ultimately vanish. Understanding such hybrid systems requires a second-order neuroscience approach while posing serious conceptual challenges, including the definition of consciousness.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
73,568
2301.10105
Does Search Engine Optimization come along with high-quality content? A comparison between optimized and non-optimized health-related web pages
Searching for medical information is both a common and important activity since it influences decisions people make about their healthcare. Using search engine optimization (SEO), content producers seek to increase the visibility of their content. SEO is more likely to be practiced by commercially motivated content producers such as pharmaceutical companies than by non-commercial providers such as governmental bodies. In this study, we ask whether content quality correlates with the presence or absence of SEO measures on a web page. We conducted a user study in which N = 61 participants comprising laypeople as well as experts in health information assessment evaluated health-related web pages classified as either optimized or non-optimized. The subjects rated the expertise of non-optimized web pages as higher than the expertise of optimized pages, justifying their appraisal by the more competent and reputable appearance of non-optimized pages. In addition, comments about the website operators of the non-optimized pages were exclusively positive, while optimized pages tended to receive positive as well as negative assessments. We found no differences between the ratings of laypeople and experts. Since non-optimized, but high-quality content may be outranked by optimized content of lower quality, trusted sources should be prioritized in rankings.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
341,701
2407.10482
NGP-RT: Fusing Multi-Level Hash Features with Lightweight Attention for Real-Time Novel View Synthesis
This paper presents NGP-RT, a novel approach for enhancing the rendering speed of Instant-NGP to achieve real-time novel view synthesis. As a classic NeRF-based method, Instant-NGP stores implicit features in multi-level grids or hash tables and applies a shallow MLP to convert the implicit features into explicit colors and densities. Although it achieves fast training speed, there is still a lot of room for improvement in its rendering speed due to the per-point MLP executions for implicit multi-level feature aggregation, especially for real-time applications. To address this challenge, our proposed NGP-RT explicitly stores colors and densities as hash features, and leverages a lightweight attention mechanism to disambiguate the hash collisions instead of using computationally intensive MLP. At the rendering stage, NGP-RT incorporates a pre-computed occupancy distance grid into the ray marching strategy to inform the distance to the nearest occupied voxel, thereby reducing the number of marching points and global memory access. Experimental results show that on the challenging Mip-NeRF360 dataset, NGP-RT achieves better rendering quality than previous NeRF-based methods, achieving 108 fps at 1080p resolution on a single Nvidia RTX 3090 GPU. Our approach is promising for NeRF-based real-time applications that require efficient and high-quality rendering.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
473,006
2302.07946
Experimenting with Emerging RISC-V Systems for Decentralised Machine Learning
Decentralised Machine Learning (DML) enables collaborative machine learning without centralised input data. Federated Learning (FL) and Edge Inference are examples of DML. While tools for DML (especially FL) are starting to flourish, many are not flexible and portable enough to experiment with novel processors (e.g., RISC-V), non-fully connected network topologies, and asynchronous collaboration schemes. We overcome these limitations via a domain-specific language allowing us to map DML schemes to an underlying middleware, i.e. the FastFlow parallel programming library. We experiment with it by generating different working DML schemes on x86-64 and ARM platforms and an emerging RISC-V one. We characterise the performance and energy efficiency of the presented schemes and systems. As a byproduct, we introduce a RISC-V porting of the PyTorch framework, the first publicly available to our knowledge.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
345,876
1810.04637
Quantification of Trabeculae Inside the Heart from MRI Using Fractal Analysis
Left ventricular non-compaction (LVNC) is a rare cardiomyopathy (CMP) that should be considered as a possible diagnosis because of its potential complications, which are heart failure, ventricular arrhythmias, and embolic events. For the analysis of cardiac functionality, extracting information from the left ventricle (LV) is already a broad field of medical imaging. Different semi-automated and automated algorithms and strategies have already been developed to obtain useful information from such a critical structure of the heart. Trabeculae in the heart undergo changes from spongy to solid tissue; failure of this process results in left ventricular non-compaction. In this project, we demonstrate fractal dimension (FD) analysis and manual segmentation of Magnetic Resonance Imaging (MRI) of the heart to quantify the amount of trabeculae inside the heart. A greater fractal dimension indicates a more complex trabecular pattern in the heart.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
110,084
2109.14796
Phonetic Word Embeddings
This work presents a novel methodology for calculating the phonetic similarity between words taking motivation from the human perception of sounds. This metric is employed to learn a continuous vector embedding space that groups similar sounding words together and can be used for various downstream computational phonology tasks. The efficacy of the method is presented for two different languages (English, Hindi) and performance gains over previous reported works are discussed on established tests for predicting phonetic similarity. To address limited benchmarking mechanisms in this field, we also introduce a heterographic pun dataset based evaluation methodology to compare the effectiveness of acoustic similarity algorithms. Further, a visualization of the embedding space is presented with a discussion on the various possible use-cases of this novel algorithm. An open-source implementation is also shared to aid reproducibility and enable adoption in related tasks.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
258,072
2308.14434
Using ChatGPT as a Static Application Security Testing Tool
In recent years, artificial intelligence has had a conspicuous growth in almost every aspect of life. One of the most applicable areas is security code review, in which a lot of AI-based tools and approaches have been proposed. Recently, ChatGPT has caught a huge amount of attention with its remarkable performance in following instructions and providing a detailed response. Regarding the similarities between natural language and code, in this paper, we study the feasibility of using ChatGPT for vulnerability detection in Python source code. Toward this goal, we feed an appropriate prompt along with vulnerable data to ChatGPT and compare its results on two datasets with the results of three widely used Static Application Security Testing tools (Bandit, Semgrep and SonarQube). We implement different kinds of experiments with ChatGPT and the results indicate that ChatGPT reduces the false positive and false negative rates and has the potential to be used for Python source code vulnerability detection.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
388,333
2406.07146
Benchmarking and Boosting Radiology Report Generation for 3D High-Resolution Medical Images
Automatic radiology report generation can significantly benefit the labor-intensive process of report writing by radiologists, especially for 3D radiographs like CT scans, which are crucial for broad clinical diagnostics yet underexplored compared to 2D radiographs. Existing methods often handle 3D volumes either slice-wise or with aggressive downsampling due to current GPU memory limitations, which results in a loss of the inherent 3D nature and critical details. To overcome these issues, we introduce a novel framework that efficiently and effectively generates radiology reports for high-resolution (HR) 3D volumes, based on large language models (LLMs). Specifically, our framework utilizes low-resolution (LR) visual tokens as queries to mine information from HR tokens, preserving detailed HR information while reducing computational costs by only processing HR informed LR visual queries. Further benefiting the field, we curate and release BIMCV-RG, a new dataset with 5,328 HR 3D volumes and paired reports, establishing the first benchmarks for report generation from 3D HR medical images. Our method consistently surpasses existing methods on this benchmark across three different settings: normal-resolution, high-resolution inputs, and zero-shot domain transfer, all at an acceptable computational cost, trainable on a single A100-80G.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
462,927