Dataset schema (one record per arXiv paper; each record below lists id, title, abstract, labels, and row index):

  id                  string, length 9 to 16
  title               string, length 4 to 278
  abstract            string, length 3 to 4.08k
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL,
  cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                      bool, 2 classes each (one multi-label flag per category)
  __index_level_0__   int64, 0 to 541k
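The 18 boolean columns form a multi-label target. A minimal decoding sketch (hypothetical row layout, assuming the column order listed in the schema above):

```python
# Column order of the 18 boolean label fields, as listed in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def decode_labels(row: dict) -> list:
    """Return the category names whose boolean flag is set in a dataset row."""
    return [col for col in LABEL_COLUMNS if row.get(col, False)]

# Example row, matching the first record below.
row = {"id": "2308.14486", "cs.SI": True, "cs.LG": True, "cs.CY": True}
print(decode_labels(row))  # ['cs.SI', 'cs.LG', 'cs.CY']
```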
2308.14486
Rebalancing Social Feed to Minimize Polarization and Disagreement
Social media have great potential for enabling public discourse on important societal issues. However, adverse effects, such as polarization and echo chambers, greatly impact the benefits of social media and call for algorithms that mitigate these effects. In this paper, we propose a novel problem formulation aimed at slightly nudging users' social feeds in order to strike a balance between relevance and diversity, thus mitigating the emergence of polarization, without lowering the quality of the feed. Our approach is based on re-weighting the relative importance of the accounts that a user follows, so as to calibrate the frequency with which the content produced by various accounts is shown to the user. We analyze the convexity properties of the problem, demonstrating the non-matrix convexity of the objective function and the convexity of the feasible set. To efficiently address the problem, we develop a scalable algorithm based on projected gradient descent. We also prove that our problem statement is a proper generalization of the undirected-case problem so that our method can also be adopted for undirected social networks. As a baseline for comparison in the undirected case, we develop a semidefinite programming approach, which provides the optimal solution. Through extensive experiments on synthetic and real-world datasets, we validate the effectiveness of our approach, which outperforms non-trivial baselines, underscoring its ability to foster healthier and more cohesive online communities.
labels: cs.SI, cs.LG, cs.CY | __index_level_0__: 388,349
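The abstract above relies on projected gradient descent over a convex feasible set. As a generic illustration only (a toy quadratic objective, not the paper's polarization-disagreement objective), here is projected gradient descent with Euclidean projection onto the probability simplex, a natural feasible set for per-user follow weights:

```python
def project_simplex(v):
    """Euclidean projection of v onto {w : w_i >= 0, sum w_i = 1}
    (the standard sort-based algorithm)."""
    u = sorted(v, reverse=True)
    cumulative, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cumulative += ui
        t = (cumulative - 1.0) / i
        if ui - t > 0:
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

def pgd(grad, w, step=0.1, iters=300):
    """Projected gradient descent: gradient step, then project back."""
    for _ in range(iters):
        g = grad(w)
        w = project_simplex([wi - step * gi for wi, gi in zip(w, g)])
    return w

# Toy objective: stay close to some "ideal" weights (hypothetical numbers).
target = [0.8, 0.4, 0.2]
grad = lambda w: [2 * (wi - ti) for wi, ti in zip(w, target)]
weights = pgd(grad, [1 / 3] * 3)
```

The fixed point is the projection of `target` onto the simplex, here roughly (0.667, 0.267, 0.067).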
2308.10724
Global visibility of publications through Digital Object Identifiers
This brief research report analyzes the availability of Digital Object Identifiers (DOIs) worldwide, highlighting the dominance of large publishing houses and the need for unique persistent identifiers to increase the visibility of publications from developing countries. The study reveals that a considerable amount of publications from developing countries are excluded from the global flow of scientific information due to the absence of DOIs, emphasizing the need for alternative publishing models. The authors suggest that the availability of DOIs should receive more attention in scholarly communication and scientometrics, contributing to a necessary debate on DOIs relevant for librarians, publishers, and scientometricians.
labels: cs.SI, Other | __index_level_0__: 386,858
2409.18572
Towards an active-learning approach to resource allocation for population-based damage prognosis
Damage prognosis is, arguably, one of the most difficult tasks of structural health monitoring (SHM). To address common problems of damage prognosis, a population-based SHM (PBSHM) approach is adopted in the current work. In this approach the prognosis problem is considered as an information-sharing problem where data from past structures are exploited to make more accurate inferences regarding currently-degrading structures. For a given population, there may exist restrictions on the resources available to conduct monitoring; thus, the current work studies the problem of allocating such resources within a population of degrading structures with a view to maximising the damage-prognosis accuracy. The challenges of the current framework are mainly associated with the inference of outliers on the level of damage evolution, given partial data from the damage-evolution phenomenon. The current approach considers an initial population of structures for which damage evolution is extensively observed. Subsequently, a second population of structures with evolving damage is considered for which two monitoring systems are available, a low-availability and high-fidelity (low-uncertainty) one, and a widely-available and low-fidelity (high-uncertainty) one. The task of the current work is to follow an active-learning approach to identify the structures to which the high-fidelity system should be assigned in order to enhance the predictive capabilities of the machine-learning model throughout the population.
labels: cs.LG | __index_level_0__: 492,313
2212.11558
DaDe: Delay-adaptive Detector for Streaming Perception
Recognizing the surrounding environment at low latency is critical in autonomous driving. In a real-time setting, the surrounding environment has already changed by the time processing finishes, and current detection models are incapable of dealing with changes in the environment that occur during processing. Streaming perception has been proposed to assess the latency and accuracy of real-time video perception. However, additional problems arise in real-world applications due to limited hardware resources, high temperatures, and other factors. In this study, we develop a model that can reflect processing delays in real time and produce the most reasonable results. By incorporating the proposed feature queue and feature select module, the system gains the ability to forecast specific time steps without any additional computational cost. Our method is tested on the Argoverse-HD dataset. It achieves higher performance than the current state-of-the-art methods (as of December 2022) in various environments when delays occur. The code is available at https://github.com/danjos95/DADE
labels: cs.AI, cs.CV | __index_level_0__: 337,827
2402.17194
The Random Forest Model for Analyzing and Forecasting the US Stock Market in the Context of Smart Finance
The stock market is a crucial component of the financial market, playing a vital role in wealth accumulation for investors, financing costs for listed companies, and the stable development of the national macroeconomy. Significant fluctuations in the stock market can damage the interests of stock investors and cause an imbalance in the industrial structure, which can interfere with the macro-level development of the national economy. The prediction of stock price trends is a popular research topic in academia. Predicting the three trends of stock prices (rising, sideways, and falling) can assist investors in making informed decisions about buying, holding, or selling stocks. Establishing an effective forecasting model for predicting these trends is of substantial practical importance. This paper evaluates the predictive performance of random forest models combined with artificial intelligence on a test set of four stocks using optimal parameters. The evaluation considers both predictive accuracy and time efficiency.
labels: cs.CE | __index_level_0__: 432,882
2104.11892
A Survey of Modern Deep Learning based Object Detection Models
Object detection is the task of classifying and localizing objects in an image or video. It has gained prominence in recent years due to its widespread applications. This article surveys recent developments in deep learning based object detectors. A concise overview of the benchmark datasets and evaluation metrics used in detection is also provided, along with some of the prominent backbone architectures used in recognition tasks. It also covers contemporary lightweight classification models used on edge devices. Lastly, we compare the performance of these architectures on multiple metrics.
labels: cs.LG, cs.CV | __index_level_0__: 232,050
1910.00033
Hidden Trigger Backdoor Attacks
With the success of deep learning algorithms in various domains, studying adversarial attacks to secure deep models in real world applications has become an important research topic. Backdoor attacks are a form of adversarial attacks on deep networks where the attacker provides poisoned data to the victim to train the model with, and then activates the attack by showing a specific small trigger pattern at the test time. Most state-of-the-art backdoor attacks either provide mislabeled poisoning data that is possible to identify by visual inspection, reveal the trigger in the poisoned data, or use noise to hide the trigger. We propose a novel form of backdoor attack where poisoned data look natural with correct labels and also more importantly, the attacker hides the trigger in the poisoned data and keeps the trigger secret until the test time. We perform an extensive study on various image classification settings and show that our attack can fool the model by pasting the trigger at random locations on unseen images although the model performs well on clean data. We also show that our proposed attack cannot be easily defended using a state-of-the-art defense algorithm for backdoor attacks.
labels: cs.CV | __index_level_0__: 147,550
2202.13804
RestainNet: a self-supervised digital re-stainer for stain normalization
Color inconsistency is an inevitable challenge in computational pathology, which generally happens because of stain intensity variations or sections scanned by different scanners. It harms pathological image analysis methods, especially learning-based models. A series of approaches have been proposed for stain normalization. However, most of them lack flexibility in practice. In this paper, we formulate stain normalization as a digital re-staining process and propose a self-supervised learning model called RestainNet. Our network is regarded as a digital re-stainer which learns how to re-stain an unstained (grayscale) image. Two digital stains, Hematoxylin (H) and Eosin (E), are extracted from the original image by the Beer-Lambert law. We propose a staining loss to maintain the correctness of stain intensity during the re-staining process. Thanks to the self-supervised nature, paired training samples are no longer necessary, which demonstrates great flexibility in practical usage. Our RestainNet outperforms existing approaches and achieves state-of-the-art performance with regard to color correctness and structure preservation. We further conducted experiments on segmentation and classification tasks, and the proposed RestainNet achieved outstanding performance compared with SOTA methods. The self-supervised design allows the network to learn any staining style with no extra effort.
labels: cs.CV | __index_level_0__: 282,753
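The Beer-Lambert step mentioned above can be sketched numerically. The stain vectors below are the commonly cited Ruifrok-Johnston reference values, not necessarily the ones RestainNet uses; optical density is computed per channel, and the two stain contributions are recovered by a least-squares fit:

```python
import math

# Reference H&E stain vectors in optical-density (OD) space
# (commonly cited values; the paper may use different ones).
H = [0.65, 0.70, 0.29]
E = [0.07, 0.99, 0.11]

def to_od(rgb, i0=255.0):
    """Beer-Lambert: optical density = -log10(I / I0), per channel."""
    return [-math.log10(max(c, 1.0) / i0) for c in rgb]

def unmix(od):
    """Least-squares fit od ~ h*H + e*E via the 2x2 normal equations."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    a, b, c = dot(H, H), dot(H, E), dot(E, E)
    p, q = dot(H, od), dot(E, od)
    det = a * c - b * b
    return (c * p - b * q) / det, (a * q - b * p) / det

# Recover stain amounts for one (hypothetical) RGB pixel.
h_amount, e_amount = unmix(to_od([100, 80, 150]))
```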
2305.05915
A synchronization-capturing multi-scale solver to the noisy integrate-and-fire neuron networks
The noisy leaky integrate-and-fire (NLIF) model describes the voltage configurations of neuron networks with an interacting many-particle system at a microscopic level. When simulating neuron networks of large sizes, computing a coarse-grained mean-field Fokker-Planck equation solving the voltage densities of the networks at a macroscopic level practically serves as a feasible alternative thanks to its high efficiency and credible accuracy. However, the macroscopic model fails to yield valid results when simulating considerably synchronous networks with active firing events. In this paper, we propose a multi-scale solver for NLIF networks, which inherits the low cost of the macroscopic solver and the high reliability of the microscopic solver. For each temporal step, the multi-scale solver uses the macroscopic solver when the firing rate of the simulated network is low, while it switches to the microscopic solver when the firing rate tends to blow up. Moreover, the macroscopic and microscopic solvers are integrated with a high-precision switching algorithm to ensure the accuracy of the multi-scale solver. The validity of the multi-scale solver is analyzed from two perspectives: firstly, we provide practically sufficient conditions that guarantee the mean-field approximation of the macroscopic model and present rigorous numerical analysis on simulation errors when coupling the two solvers; secondly, the numerical performance of the multi-scale solver is validated through simulating several large neuron networks, including networks with either instantaneous or periodic input currents which prompt active firing events over a period of time.
labels: cs.NE, Other | __index_level_0__: 363,339
1711.03896
A Geometric Characterization of Observability in Inertial Parameter Identification
This paper presents an algorithm to geometrically characterize inertial parameter identifiability for an articulated robot. The geometric approach tests identifiability across the infinite space of configurations using only a finite set of conditions and without approximation. It can be applied to general open-chain kinematic trees ranging from industrial manipulators to legged robots, and it is the first solution for this broad set of systems that is provably correct. The high-level operation of the algorithm is based on a key observation: Undetectable changes in inertial parameters can be represented as sequences of inertial transfers across the joints. Drawing on the exponential parameterization of rigid-body kinematics, undetectable inertial transfers are analyzed in terms of observability from linear systems theory. This analysis can be applied recursively, and lends an overall complexity of $O(N)$ to characterize parameter identifiability for a system of $N$ bodies. Matlab source code for the new algorithm is provided.
labels: cs.RO | __index_level_0__: 84,291
1202.3710
Strictly Proper Mechanisms with Cooperating Players
Prediction markets provide an efficient means to assess uncertain quantities from forecasters. Traditional and competitive strictly proper scoring rules have been shown to incentivize players to provide truthful probabilistic forecasts. However, we show that when those players can cooperate, these mechanisms can instead discourage them from reporting what they really believe. When players with different beliefs are able to cooperate and form a coalition, these mechanisms admit arbitrage and there is a report that will always pay coalition members more than their truthful forecasts. If the coalition were created by an intermediary, such as a web portal, the intermediary would be guaranteed a profit.
labels: cs.AI, Other | __index_level_0__: 14,382
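The arbitrage claim above can be checked numerically for the quadratic (Brier-style) scoring rule: by the bias-variance identity, a coalition that jointly reports the mean of its members' beliefs earns strictly more in total than truthful reporting under every outcome whenever the beliefs differ. A toy two-player check (illustrative numbers, not the paper's general construction):

```python
def quadratic_score(report, outcome):
    """Strictly proper quadratic (Brier-style) score for a binary event."""
    return 1.0 - (outcome - report) ** 2

beliefs = [0.2, 0.8]                 # two coalition members with differing beliefs
joint = sum(beliefs) / len(beliefs)  # everyone reports the mean belief

payoffs = {}
for outcome in (0, 1):
    truthful = sum(quadratic_score(p, outcome) for p in beliefs)
    coalition = len(beliefs) * quadratic_score(joint, outcome)
    payoffs[outcome] = (truthful, coalition)
# For both outcomes the truthful total is 1.32 and the coalition total is 1.5,
# so the joint report dominates truthful reporting with certainty.
```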
1904.01301
Pragmatically Informative Text Generation
We improve the informativeness of models for conditional text generation using techniques from computational pragmatics. These techniques formulate language production as a game between speakers and listeners, in which a speaker should generate output text that a listener can use to correctly identify the original input that the text describes. While such approaches are widely used in cognitive science and grounded language learning, they have received less attention for more standard language generation tasks. We consider two pragmatic modeling methods for text generation: one where pragmatics is imposed by information preservation, and another where pragmatics is imposed by explicit modeling of distractors. We find that these methods improve the performance of strong existing systems for abstractive summarization and generation from structured meaning representations.
labels: cs.CL | __index_level_0__: 126,108
1611.10094
The influence of the network topology on the agility of a supply chain
The proper performance of a supply chain depends on the pattern of relationships among firms. Although there is not yet a general consensus among researchers, many studies indicate that scale-free topologies, where a few highly connected firms are combined with many low-connected firms, assure the highest efficiency of a supply chain. This paper studies the network topology that leads to the highest agility of the supply chain when sudden demand changes occur. To do this, an agent-based model of a supply chain with restricted relationships between agents is built. The model includes three tiers, where the flow of material is distributed from the bottom supplier to the final customer, passing necessarily through firms in every tier. Agility is measured in the model simulations through the order fulfillment rate. Unlike previous theoretical and lab results, the simulation of the model shows that the highest levels of agility are not obtained with a scale-free topology. Instead, homogeneous distributions of links, such as those induced by regular or Poisson probability laws, show higher agility values than heterogeneous distributions. Other previous recommendations, such as redundancy or having multiple suppliers, are confirmed by the simulations. The general conclusion is that the most suitable network topology in terms of agility depends on the specific conditions of the supply chain and the aspects of the performance to be analyzed.
labels: cs.SI | __index_level_0__: 64,772
2202.12451
Human-Centered Concept Explanations for Neural Networks
Understanding complex machine learning models such as deep neural networks with explanations is crucial in various applications. Many explanations stem from the model perspective, and may not necessarily effectively communicate why the model is making its predictions at the right level of abstraction. For example, providing importance weights to individual pixels in an image can only express which parts of that particular image are important to the model, but humans may prefer an explanation which explains the prediction by concept-based thinking. In this work, we review the emerging area of concept based explanations. We start by introducing concept explanations including the class of Concept Activation Vectors (CAV) which characterize concepts using vectors in appropriate spaces of neural activations, and discuss different properties of useful concepts, and approaches to measure the usefulness of concept vectors. We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats. Finally, we discuss some case studies that showcase the utility of such concept-based explanations in synthetic settings and real world applications.
labels: cs.AI, cs.LG | __index_level_0__: 282,247
2309.05780
LUNet: Deep Learning for the Segmentation of Arterioles and Venules in High Resolution Fundus Images
The retina is the only part of the human body in which blood vessels can be accessed non-invasively using imaging techniques such as digital fundus images (DFI). The spatial distribution of the retinal microvasculature may change with cardiovascular diseases and thus the eyes may be regarded as a window to our hearts. Computerized segmentation of the retinal arterioles and venules (A/V) is essential for automated microvasculature analysis. Using active learning, we created a new DFI dataset containing 240 crowd-sourced manual A/V segmentations performed by fifteen medical students and reviewed by an ophthalmologist, and developed LUNet, a novel deep learning architecture for high resolution A/V segmentation. LUNet architecture includes a double dilated convolutional block that aims to enhance the receptive field of the model and reduce its parameter count. Furthermore, LUNet has a long tail that operates at high resolution to refine the segmentation. The custom loss function emphasizes the continuity of the blood vessels. LUNet is shown to significantly outperform two state-of-the-art segmentation algorithms on the local test set as well as on four external test sets simulating distribution shifts across ethnicity, comorbidities, and annotators. We make the newly created dataset open access (upon publication).
labels: cs.CV | __index_level_0__: 391,182
2206.07534
Optimal Synthesis of LTI Koopman Models for Nonlinear Systems with Inputs
A popular technique used to obtain linear representations of nonlinear systems is the so-called Koopman approach, where the nonlinear dynamics are lifted to a (possibly infinite dimensional) linear space through nonlinear functions called observables. In the lifted space, the dynamics are linear and represented by a so-called Koopman operator. While the Koopman theory was originally introduced for autonomous systems, it has been widely used to derive linear time-invariant (LTI) models for nonlinear systems with inputs through various approximation schemes such as the extended dynamic mode decomposition (EDMD). However, recent extensions of the Koopman theory show that the lifting process for such systems results in a linear parameter-varying (LPV) model instead of an LTI form. As LTI Koopman model based control has been successfully used in practice, and it is generally tempting to use such LTI descriptions of nonlinear systems due to the simplicity of the associated control tool chain, a systematic approach is needed to synthesise optimal LTI approximations of LPV Koopman models, as opposed to ad-hoc schemes such as EDMD, which is based on least-squares regression. In this work, we introduce optimal LTI Koopman approximations of exact Koopman models of nonlinear systems with inputs by using l2-gain and generalized H2 norm performance measures. We demonstrate the advantages of the proposed Koopman modelling procedure compared to EDMD.
labels: cs.SY | __index_level_0__: 302,775
2010.13018
Adversarial Robust Low Rank Matrix Estimation: Compressed Sensing and Matrix Completion
We consider robust low rank matrix estimation as a trace regression when outputs are contaminated by adversaries. The adversaries are allowed to add arbitrary values to arbitrary outputs. Such values can depend on any samples. We deal with matrix compressed sensing, including lasso as a partial problem, and matrix completion, and then we obtain sharp estimation error bounds. To obtain the error bounds for different models such as matrix compressed sensing and matrix completion, we propose a simple unified approach based on a combination of the Huber loss function and the nuclear norm penalization, which is a different approach from the conventional ones. Some error bounds obtained in the present paper are sharper than the past ones.
labels: cs.LG | __index_level_0__: 202,967
1704.04860
Caching Policy Optimization for D2D Communications by Learning User Preference
Cache-enabled device-to-device (D2D) communications can boost network throughput. By pre-downloading contents to the local caches of users, the content requested by a user can be transmitted via D2D links by other users in proximity. Prior works optimize the caching policy at users with knowledge of content popularity, defined as the probability distribution of requests for every file in a library by all users. However, content popularity cannot reflect the interest of each individual user, and thus a popularity-based caching policy may not fully capture the performance gain introduced by caching. In this paper, we optimize the caching policy for cache-enabled D2D by learning user preference, defined as the conditional probability distribution of a user's request for a file given that the user sends a request. We first formulate an optimization problem with given user preference to maximize the offloading probability, which is proved NP-hard, and then provide a greedy algorithm to find the solution. In order to predict the preference of each individual user, we model the user request behavior by probabilistic latent semantic analysis (pLSA), and then apply the expectation maximization (EM) algorithm to estimate the model parameters. Simulation results show that the user preference can be learnt quickly. Compared to the popularity-based caching policy, the offloading gain achieved by the proposed policy can be remarkably improved even with predicted user preference.
labels: cs.IT | __index_level_0__: 71,907
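The greedy placement idea above can be sketched on a toy instance (hypothetical preferences and proximity graph, omitting the pLSA preference-learning step): repeatedly add the cache placement with the largest marginal gain in offloading probability until no placement improves it.

```python
# Toy instance (hypothetical numbers): 2 users, 3 files, cache size 1 each.
preference = {            # preference[u][f]: prob. user u's next request is f
    "u1": {"f1": 0.6, "f2": 0.3, "f3": 0.1},
    "u2": {"f1": 0.2, "f2": 0.2, "f3": 0.6},
}
neighbors = {"u1": {"u1", "u2"}, "u2": {"u1", "u2"}}  # D2D proximity
cache_size = 1

def offload_prob(cache):
    """Average probability that a request is served from some nearby cache."""
    total = 0.0
    for u, prefs in preference.items():
        total += sum(p for f, p in prefs.items()
                     if any(f in cache[v] for v in neighbors[u]))
    return total / len(preference)

cache = {u: set() for u in preference}
while True:
    current = offload_prob(cache)
    candidates = [(u, f) for u in preference if len(cache[u]) < cache_size
                  for f in preference[u] if f not in cache[u]]
    scored = []
    for u, f in candidates:          # evaluate each placement's marginal gain
        cache[u].add(f)
        scored.append((offload_prob(cache), u, f))
        cache[u].remove(f)
    if not scored:
        break
    best_val, u, f = max(scored)
    if best_val <= current:          # no placement improves offloading
        break
    cache[u].add(f)
```

On this instance the greedy ends up caching f1 at one user and f3 at the other, offloading 75% of requests.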
1911.10298
CoverNet: Multimodal Behavior Prediction using Trajectory Sets
We present CoverNet, a new method for multimodal, probabilistic trajectory prediction for urban driving. Previous work has employed a variety of methods, including multimodal regression, occupancy maps, and 1-step stochastic policies. We instead frame the trajectory prediction problem as classification over a diverse set of trajectories. The size of this set remains manageable due to the limited number of distinct actions that can be taken over a reasonable prediction horizon. We structure the trajectory set to a) ensure a desired level of coverage of the state space, and b) eliminate physically impossible trajectories. By dynamically generating trajectory sets based on the agent's current state, we can further improve our method's efficiency. We demonstrate our approach on public, real-world self-driving datasets, and show that it outperforms state-of-the-art methods.
labels: cs.LG, cs.RO | __index_level_0__: 154,782
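The trajectory-set construction above (covering the reachable states with a manageable set) resembles a greedy epsilon-cover. A 1-D toy sketch (hypothetical candidate trajectories; the paper works with full dynamically feasible trajectories):

```python
def build_trajectory_set(candidates, eps):
    """Greedy epsilon-cover: select anchors until every candidate trajectory
    is within eps (max pointwise distance) of some selected anchor."""
    def dist(a, b):
        return max(abs(x - y) for x, y in zip(a, b))
    selected = []
    uncovered = list(candidates)
    while uncovered:
        # pick the candidate covering the most currently uncovered ones
        best = max(uncovered,
                   key=lambda c: sum(dist(c, o) <= eps for o in uncovered))
        selected.append(best)
        uncovered = [o for o in uncovered if dist(best, o) > eps]
    return selected

# 1-D toy trajectories (hypothetical): x-positions over 3 future steps.
cands = [(0, 0, 0), (0, 0, 1), (0, 1, 2), (0, 2, 4), (0, 2, 5)]
anchors = build_trajectory_set(cands, eps=1.0)
```

Classification is then performed over the small anchor set rather than a continuous trajectory space.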
2404.13679
GScream: Learning 3D Geometry and Feature Consistent Gaussian Splatting for Object Removal
This paper tackles the intricate challenge of object removal to update the radiance field using the 3D Gaussian Splatting. The main challenges of this task lie in the preservation of geometric consistency and the maintenance of texture coherence in the presence of the substantial discrete nature of Gaussian primitives. We introduce a robust framework specifically designed to overcome these obstacles. The key insight of our approach is the enhancement of information exchange among visible and invisible areas, facilitating content restoration in terms of both geometry and texture. Our methodology begins with optimizing the positioning of Gaussian primitives to improve geometric consistency across both removed and visible areas, guided by an online registration process informed by monocular depth estimation. Following this, we employ a novel feature propagation mechanism to bolster texture coherence, leveraging a cross-attention design that bridges sampling Gaussians from both uncertain and certain areas. This innovative approach significantly refines the texture coherence within the final radiance field. Extensive experiments validate that our method not only elevates the quality of novel view synthesis for scenes undergoing object removal but also showcases notable efficiency gains in training and rendering speeds.
labels: cs.CV | __index_level_0__: 448,396
2004.14584
Out-of-the-box channel pruned networks
In the last decade convolutional neural networks have become gargantuan. Pre-trained models, when used as initializers, make it possible to fine-tune ever larger networks on small datasets. Consequently, not all the convolutional features that these fine-tuned models detect are requisite for the end task. Several works on channel pruning have been proposed to prune away compute and memory from models that were trained already. Typically, these involve policies that decide which and how many channels to remove from each layer, leading to channel-wise and/or layer-wise pruning profiles, respectively. In this paper, we conduct several baseline experiments and establish that profiles from random channel-wise pruning policies are as good as metric-based ones. We also establish that there may exist profiles from some layer-wise pruning policies that are measurably better than common baselines. We then demonstrate that the top layer-wise pruning profiles found using an exhaustive random search on one dataset are also among the top profiles for other datasets. This implies that we could identify out-of-the-box layer-wise pruning profiles using benchmark datasets and use these directly for new datasets. Furthermore, we develop a Reinforcement Learning (RL) policy-based search algorithm with the direct objective of finding transferable layer-wise pruning profiles using many models for the same architecture. We use a novel reward formulation that drives this RL search towards an expected compression while maximizing accuracy. Our results show that our transferred RL-based profiles are as good as or better than the best profiles found on the original dataset via exhaustive search. We then demonstrate that if we find the profiles using a mid-sized dataset such as Cifar10/100, we are able to transfer them to even a large dataset such as Imagenet.
labels: cs.LG, cs.CV | __index_level_0__: 174,944
2106.14184
Memory Guided Road Detection
In self-driving car applications, there is a requirement to predict the location of the lane given an input RGB front-facing image. In this paper, we propose an architecture that allows us to increase the speed and robustness of road detection without a large hit in accuracy by introducing an underlying shared feature space that is propagated over time, which serves as a flowing dynamic memory. By utilizing the gist of previous frames, we train the network to predict the current road with greater accuracy and less deviation from previous frames.
labels: cs.CV | __index_level_0__: 243,322
2305.10434
Learning the Visualness of Text Using Large Vision-Language Models
Visual text evokes an image in a person's mind, while non-visual text fails to do so. A method to automatically detect visualness in text will enable text-to-image retrieval and generation models to augment text with relevant images. This is particularly challenging with long-form text as text-to-image generation and retrieval models are often triggered for text that is designed to be explicitly visual in nature, whereas long-form text could contain many non-visual sentences. To this end, we curate a dataset of 3,620 English sentences and their visualness scores provided by multiple human annotators. We also propose a fine-tuning strategy that adapts large vision-language models like CLIP by modifying the model's contrastive learning objective to map text identified as non-visual to a common NULL image while matching visual text to their corresponding images in the document. We evaluate the proposed approach on its ability to (i) classify visual and non-visual text accurately, and (ii) attend over words that are identified as visual in psycholinguistic studies. Empirical evaluation indicates that our approach performs better than several heuristics and baseline models for the proposed task. Furthermore, to highlight the importance of modeling the visualness of text, we conduct qualitative analyses of text-to-image generation systems like DALL-E. Project webpage: https://gaurav22verma.github.io/text-visualness/
labels: cs.AI, cs.LG, cs.CL | __index_level_0__: 365,060
1404.3026
On the Ground Validation of Online Diagnosis with Twitter and Medical Records
Social media has been considered as a data source for tracking disease. However, most analyses are based on models that prioritize strong correlation with population-level disease rates over determining whether or not specific individual users are actually sick. Taking a different approach, we develop a novel system for social-media based disease detection at the individual level using a sample of professionally diagnosed individuals. Specifically, we develop a system for making an accurate influenza diagnosis based on an individual's publicly available Twitter data. We find that about half (17/35 = 48.57%) of the users in our sample that were sick explicitly discuss their disease on Twitter. By developing a meta classifier that combines text analysis, anomaly detection, and social network analysis, we are able to diagnose an individual with greater than 99% accuracy even if she does not discuss her health.
false
false
false
true
false
false
true
false
true
false
false
false
false
false
false
false
false
false
32,265
2202.09374
Explaining, Evaluating and Enhancing Neural Networks' Learned Representations
Most efforts in interpretability in deep learning have focused on (1) extracting explanations of a specific downstream task in relation to the input features and (2) imposing constraints on the model, often at the expense of predictive performance. New advances in (unsupervised) representation learning and transfer learning, however, raise the need for an explanatory framework for networks that are trained without a specific downstream task. We address these challenges by showing how explainability can be an aid, rather than an obstacle, towards better and more efficient representations. Specifically, we propose a natural aggregation method generalizing attribution maps between any two (convolutional) layers of a neural network. Additionally, we employ such attributions to define two novel scores for evaluating the informativeness and the disentanglement of latent embeddings. Extensive experiments show that the proposed scores do correlate with the desired properties. We also confirm and extend previously known results concerning the independence of some common saliency strategies from the model parameters. Finally, we show that adopting our proposed scores as constraints during the training of a representation learning task improves the downstream performance of the model.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
281,173
2305.12073
GELU Activation Function in Deep Learning: A Comprehensive Mathematical Analysis and Performance
Selecting the most suitable activation function is a critical factor in the effectiveness of deep learning models, as it influences their learning capacity, stability, and computational efficiency. In recent years, the Gaussian Error Linear Unit (GELU) activation function has emerged as a dominant method, surpassing traditional functions such as the Rectified Linear Unit (ReLU) in various applications. This study presents a rigorous mathematical investigation of the GELU activation function, exploring its differentiability, boundedness, stationarity, and smoothness properties in detail. Additionally, we conduct an extensive experimental comparison of the GELU function against a broad range of alternative activation functions, utilizing a residual convolutional network trained on the CIFAR-10, CIFAR-100, and STL-10 datasets as the empirical testbed. Our results demonstrate the superior performance of GELU compared to other activation functions, establishing its suitability for a wide range of deep learning applications. This comprehensive study contributes to a more profound understanding of the underlying mathematical properties of GELU and provides valuable insights for practitioners aiming to select activation functions that optimally align with their specific objectives and constraints in deep learning.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
true
false
false
365,821
2111.01203
One Proxy Device Is Enough for Hardware-Aware Neural Architecture Search
Convolutional neural networks (CNNs) are used in numerous real-world applications such as vision-based autonomous driving and video content analysis. To run CNN inference on various target devices, hardware-aware neural architecture search (NAS) is crucial. A key requirement of efficient hardware-aware NAS is the fast evaluation of inference latencies in order to rank different architectures. While building a latency predictor for each target device has been commonly used in state of the art, this is a very time-consuming process, lacking scalability in the presence of extremely diverse devices. In this work, we address the scalability challenge by exploiting latency monotonicity -- the architecture latency rankings on different devices are often correlated. When strong latency monotonicity exists, we can re-use architectures searched for one proxy device on new target devices, without losing optimality. In the absence of strong latency monotonicity, we propose an efficient proxy adaptation technique to significantly boost the latency monotonicity. Finally, we validate our approach and conduct experiments with devices of different platforms on multiple mainstream search spaces, including MobileNet-V2, MobileNet-V3, NAS-Bench-201, ProxylessNAS and FBNet. Our results highlight that, by using just one proxy device, we can find almost the same Pareto-optimal architectures as the existing per-device NAS, while avoiding the prohibitive cost of building a latency predictor for each device. GitHub: https://github.com/Ren-Research/OneProxy
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
264,480
2101.00036
KART: Parameterization of Privacy Leakage Scenarios from Pre-trained Language Models
For the safe sharing of pre-trained language models, no guidelines exist at present owing to the difficulty of estimating the upper bound of the risk of privacy leakage. One problem is that previous studies have assessed the risk for different real-world privacy leakage scenarios and attack methods, which reduces the portability of the findings. To tackle this problem, we represent complex real-world privacy leakage scenarios under a universal parameterization, \textit{Knowledge, Anonymization, Resource, and Target} (KART). KART parameterization has two merits: (i) it clarifies the definition of privacy leakage in each experiment and (ii) it improves the comparability of the findings of risk assessments. We show that previous studies can be simply reviewed by parameterizing the scenarios with KART. We also demonstrate privacy risk assessments in different scenarios under the same attack method, which suggests that KART helps approximate the upper bound of risk under a specific attack or scenario. We believe that KART helps integrate past and future findings on privacy risk and will contribute to a standard for sharing language models.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
213,942
1404.1872
Intégration des données d'un lexique syntaxique dans un analyseur syntaxique probabiliste
This article reports the evaluation of the integration of data from a syntactic-semantic lexicon, the Lexicon-Grammar of French, into a syntactic parser. We show that by changing the set of labels for verbs and predicational nouns, we can improve the performance on French of a non-lexicalized probabilistic parser.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
32,154
2112.12321
Physics Constrained Flow Neural Network for Short-Timescale Predictions in Data Communications Networks
Machine learning is gaining growing momentum in various recent models for the dynamic analysis of information flows in data communications networks. These preliminary models often rely on off-the-shelf learning models to predict from historical statistics while disregarding the physics governing the generating behaviors of these flows. This paper instead introduces Flow Neural Network (FlowNN) to improve the feature representation with learned physical bias. This is implemented by an induction layer, working upon the embedding layer, to impose the physics-connected data correlations, and a self-supervised learning strategy with stop-gradient to make the learned physics universal. For short-timescale network prediction tasks, FlowNN achieves a 17% - 71% loss decrease compared to the state-of-the-art baselines on both synthetic and real-world networking datasets, which shows the strength of this new approach.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
272,933
1805.10611
Robust Hypothesis Testing Using Wasserstein Uncertainty Sets
We develop a novel computationally efficient and general framework for robust hypothesis testing. The new framework features a new way to construct uncertainty sets under the null and the alternative distributions, which are sets centered around the empirical distribution defined via the Wasserstein metric; thus our approach is data-driven and free of distributional assumptions. We develop a convex safe approximation of the minimax formulation and show that such approximation renders a nearly-optimal detector among the family of all possible tests. By exploiting the structure of the least favorable distribution, we also develop a tractable reformulation of such approximation, with complexity independent of the dimension of the observation space, and which can be nearly sample-size-independent in general. A real-data example using human activity data demonstrates the excellent performance of the new robust detector.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
98,723
2310.16295
Instance-wise Linearization of Neural Network for Model Interpretation
Neural networks have achieved remarkable successes in many scientific fields. However, the interpretability of neural network models is still a major bottleneck to deploying such techniques in our daily lives. The challenge stems from the non-linear behavior of the neural network, which raises a critical question: how does a model use input features to make a decision? The classical approach to this challenge is feature attribution, which assigns an importance score to each input feature and reveals its importance to the current prediction. However, current feature attribution approaches often indicate the importance of each input feature without detailing how the features are actually processed by the model internally. These attribution approaches thus raise the concern of whether they highlight the correct features for a model prediction. For a neural network model, the non-linear behavior is often caused by the non-linear activation units of the model. However, the computation of a single prediction from a neural network model is locally linear, because one prediction has only one activation pattern. Based on this observation, we propose an instance-wise linearization approach that reformulates the forward computation process of a neural network prediction. This approach reformulates different layers of convolutional neural networks as linear matrix multiplications. Aggregating all layers' computation, a prediction by a complex convolutional neural network can be described as a linear matrix multiplication $F(x) = W \cdot x + b$. This equation not only provides a feature attribution map that highlights the importance of the input features but also tells exactly how each input feature contributes to a prediction. Furthermore, we discuss the application of this technique in both supervised classification and unsupervised learning of the parametric t-SNE dimension reduction.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
402,662
1306.1511
SPATA: A Seeding and Patching Algorithm for Hybrid Transcriptome Assembly
Transcriptome assembly from RNA-Seq reads is an active area of bioinformatics research. The ever-declining cost and the increasing depth of RNA-Seq have provided unprecedented opportunities to better identify expressed transcripts. However, the nonlinear transcript structures and the ultra-high throughput of RNA-Seq reads pose significant algorithmic and computational challenges to the existing transcriptome assembly approaches, either reference-guided or de novo. While reference-guided approaches offer good sensitivity, they rely on alignment results of the splice-aware aligners and are thus unsuitable for species with incomplete reference genomes. In contrast, de novo approaches do not depend on the reference genome but face a computational daunting task derived from the complexity of the graph built for the whole transcriptome. In response to these challenges, we present a hybrid approach to exploit an incomplete reference genome without relying on splice-aware aligners. We have designed a split-and-align procedure to efficiently localize the reads to individual genomic loci, which is followed by an accurate de novo assembly to assemble reads falling into each locus. Using extensive simulation data, we demonstrate a high accuracy and precision in transcriptome reconstruction by comparing to selected transcriptome assembly tools. Our method is implemented in assemblySAM, a GUI software freely available at http://sammate.sourceforge.net.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
25,053
1608.01059
Analyzing Linear Dynamical Systems: From Modeling to Coding and Learning
Encoding time-series with Linear Dynamical Systems (LDSs) leads to rich models with applications ranging from dynamical texture recognition to video segmentation to name a few. In this paper, we propose to represent LDSs with infinite-dimensional subspaces and derive an analytic solution to obtain stable LDSs. We then devise efficient algorithms to perform sparse coding and dictionary learning on the space of infinite-dimensional subspaces. In particular, two solutions are developed to sparsely encode an LDS. In the first method, we map the subspaces into a Reproducing Kernel Hilbert Space (RKHS) and achieve our goal through kernel sparse coding. As for the second solution, we propose to embed the infinite-dimensional subspaces into the space of symmetric matrices and formulate the sparse coding accordingly in the induced space. For dictionary learning, we encode time-series by introducing a novel concept, namely the two-fold LDSs. We then make use of the two-fold LDSs to derive an analytical form for updating atoms of an LDS dictionary, i.e., each atom is an LDS itself. Compared to several baselines and state-of-the-art methods, the proposed methods yield higher accuracies in various classification tasks including video classification and tactile recognition.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
59,379
2111.11112
Data Sensing and Offloading in Edge Computing Networks: TDMA or NOMA?
With the development of Internet-of-Things (IoT), we witness the explosive growth in the number of devices with sensing, computing, and communication capabilities, along with a large amount of raw data generated at the network edge. Mobile (multi-access) edge computing (MEC), acquiring and processing data at network edge (like base station (BS)) via wireless links, has emerged as a promising technique for real-time applications. In this paper, we consider the scenario that multiple devices sense then offload data to an edge server/BS, and the offloading throughput maximization problems are studied by joint radio-and-computation resource allocation, based on time-division multiple access (TDMA) and non-orthogonal multiple access (NOMA) multiuser computation offloading. Particularly, we take the sequence of TDMA-based multiuser transmission/offloading into account. The studied problems are NP-hard and non-convex. A set of low-complexity algorithms are designed based on decomposition approach and exploration of valuable insights of problems. They are either optimal or can achieve close-to-optimal performance as shown by simulation. The comprehensive simulation results show that the sequence optimized TDMA scheme achieves better throughput performance than the NOMA scheme, while the NOMA scheme is better under the assumptions of time-sharing strategy and the identical sensing capability of the devices.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
267,552
2301.13848
Benchmarking Large Language Models for News Summarization
Large language models (LLMs) have shown promise for automatic summarization but the reasons behind their successes are poorly understood. By conducting a human evaluation on ten LLMs across different pretraining methods, prompts, and model scales, we make two important observations. First, we find instruction tuning, and not model size, is the key to the LLM's zero-shot summarization capability. Second, existing studies have been limited by low-quality references, leading to underestimates of human performance and lower few-shot and finetuning performance. To better evaluate LLMs, we perform human evaluation over high-quality summaries we collect from freelance writers. Despite major stylistic differences such as the amount of paraphrasing, we find that LLM summaries are judged to be on par with human-written summaries.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
343,054
2408.07726
Graph neural network surrogate for strategic transport planning
As the complexities of urban environments continue to grow, the modelling of transportation systems becomes increasingly challenging. This paper explores the application of advanced Graph Neural Network (GNN) architectures as surrogate models for strategic transport planning. Building upon a prior work that laid the foundation with graph convolution networks (GCN), our study delves into the comparative analysis of established GCN with the more expressive Graph Attention Network (GAT). Additionally, we propose a novel GAT variant (namely GATv3) to address over-smoothing issues in graph-based models. Our investigation also includes the exploration of a hybrid model combining both GCN and GAT architectures, aiming to investigate the performance of the mixture. The three models are applied to various experiments to understand their limits. We analyse hierarchical regression setups, combining classification and regression tasks, and introduce fine-grained classification with a proposal of a method to convert outputs to precise values. Results reveal the superior performance of the new GAT in classification tasks. To the best of the authors' knowledge, this is the first GAT model in the literature to achieve larger depths. Surprisingly, the fine-grained classification task demonstrates the GCN's unexpected dominance with additional training data. This shows that synthetic data generators can increase the training data without overfitting issues whilst improving model performance. In conclusion, this research advances GNN-based surrogate modelling, providing insights for refining GNN architectures. The findings open avenues for investigating the potential of the newly proposed GAT architecture and the modelling setups for other transportation problems.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
480,703
2409.13513
Efficient and Discriminative Image Feature Extraction for Universal Image Retrieval
Current image retrieval systems often face domain specificity and generalization issues. This study aims to overcome these limitations by developing a computationally efficient training framework for a universal feature extractor that provides strong semantic image representations across various domains. To this end, we curated a multi-domain training dataset, called M4D-35k, which allows for resource-efficient training. Additionally, we conduct an extensive evaluation and comparison of various state-of-the-art visual-semantic foundation models and margin-based metric learning loss functions regarding their suitability for efficient universal feature extraction. Despite constrained computational resources, we achieve near state-of-the-art results on the Google Universal Image Embedding Challenge, with a mMP@5 of 0.721. This places our method at the second rank on the leaderboard, just 0.7 percentage points behind the best performing method. However, our model has 32% fewer overall parameters and 289 times fewer trainable parameters. Compared to methods with similar computational requirements, we outperform the previous state of the art by 3.3 percentage points. We release our code and M4D-35k training set annotations at https://github.com/morrisfl/UniFEx.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
490,025
2302.06039
Predicting Class Distribution Shift for Reliable Domain Adaptive Object Detection
Unsupervised Domain Adaptive Object Detection (UDA-OD) uses unlabelled data to improve the reliability of robotic vision systems in open-world environments. Previous approaches to UDA-OD based on self-training have been effective in overcoming changes in the general appearance of images. However, shifts in a robot's deployment environment can also impact the likelihood that different objects will occur, termed class distribution shift. Motivated by this, we propose a framework for explicitly addressing class distribution shift to improve pseudo-label reliability in self-training. Our approach uses the domain invariance and contextual understanding of a pre-trained joint vision and language model to predict the class distribution of unlabelled data. By aligning the class distribution of pseudo-labels with this prediction, we provide weak supervision of pseudo-label accuracy. To further account for low quality pseudo-labels early in self-training, we propose an approach to dynamically adjust the number of pseudo-labels per image based on model confidence. Our method outperforms state-of-the-art approaches on several benchmarks, including a 4.7 mAP improvement when facing challenging class distribution shift.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
345,266
1805.01369
Framewise approach in multimodal emotion recognition in OMG challenge
In this report we describe our approach, which achieves $53\%$ unweighted accuracy over $7$ emotions and mean squared errors of $0.05$ and $0.09$ for arousal and valence in the OMG emotion recognition challenge. Our results were obtained with an ensemble of single-modality models trained separately on voice and face data from video. We treat each stream as a sequence of frames. We then estimate features from the frames and process them with a recurrent neural network. By an audio frame we mean a short $0.4$-second spectrogram interval. For feature estimation from face images we used our own ResNet neural network pretrained on the AffectNet database. Each short spectrogram was also treated as a picture and processed by a convolutional network. As a base audio model we used a ResNet pretrained on a speaker recognition task. Predictions from both modalities were fused at the decision level and improve on the single-channel approaches by a few percent.
false
false
false
false
true
false
false
false
true
false
false
true
false
false
false
false
false
false
96,646
2103.04941
InFillmore: Frame-Guided Language Generation with Bidirectional Context
We propose a structured extension to bidirectional-context conditional language generation, or "infilling," inspired by Frame Semantic theory (Fillmore, 1976). Guidance is provided through two approaches: (1) model fine-tuning, conditioning directly on observed symbolic frames, and (2) a novel extension to disjunctive lexically constrained decoding that leverages frame semantic lexical units. Automatic and human evaluations confirm that frame-guided generation allows for explicit manipulation of intended infill semantics, with minimal loss in distinguishability from human-generated text. Our methods flexibly apply to a variety of use scenarios, and we provide a codebase and interactive demo available from https://nlp.jhu.edu/demos/infillmore.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
223,810
2502.13954
Latent Distribution Decoupling: A Probabilistic Framework for Uncertainty-Aware Multimodal Emotion Recognition
Multimodal multi-label emotion recognition (MMER) aims to identify the concurrent presence of multiple emotions in multimodal data. Existing studies primarily focus on improving fusion strategies and modeling modality-to-label dependencies. However, they often overlook the impact of \textbf{aleatoric uncertainty}, which is the inherent noise in the multimodal data and hinders the effectiveness of modality fusion by introducing ambiguity into feature representations. To address this issue and effectively model aleatoric uncertainty, this paper proposes Latent emotional Distribution Decomposition with Uncertainty perception (LDDU) framework from a novel perspective of latent emotional space probabilistic modeling. Specifically, we introduce a contrastive disentangled distribution mechanism within the emotion space to model the multimodal data, allowing for the extraction of semantic features and uncertainty. Furthermore, we design an uncertainty-aware fusion multimodal method that accounts for the dispersed distribution of uncertainty and integrates distribution information. Experimental results show that LDDU achieves state-of-the-art performance on the CMU-MOSEI and M$^3$ED datasets, highlighting the importance of uncertainty modeling in MMER. Code is available at https://github.com/201983290498/lddu\_mmer.git.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
535,590
1803.03503
Construction of neural networks for realization of localized deep learning
The subject of deep learning has recently attracted users of machine learning from various disciplines, including medical diagnosis and bioinformatics, financial market analysis and online advertisement, speech and handwriting recognition, computer vision and natural language processing, time series forecasting, and search engines. However, theoretical development of deep learning is still in its infancy. The objective of this paper is to introduce a deep neural network (also called deep-net) approach to localized manifold learning, with each hidden layer endowed with a specific learning task. For the purpose of illustrations, we only focus on deep-nets with three hidden layers, with the first layer for dimensionality reduction, the second layer for bias reduction, and the third layer for variance reduction. A feedback component is also designed to eliminate outliers. The main theoretical result in this paper is the order $\mathcal O\left(m^{-2s/(2s+d)}\right)$ of approximation of the regression function with regularity $s$, in terms of the number $m$ of sample points, where the (unknown) manifold dimension $d$ replaces the dimension $D$ of the sampling (Euclidean) space for shallow nets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
92,263
2311.18200
INarIG: Iterative Non-autoregressive Instruct Generation Model For Word-Level Auto Completion
Computer-aided translation (CAT) aims to enhance human translation efficiency and is still important in scenarios where machine translation cannot meet quality requirements. One fundamental task within this field is Word-Level Auto Completion (WLAC). WLAC predicts a target word given a source sentence, translation context, and a human-typed character sequence. Previous works either employ word classification models to exploit contextual information from both sides of the target word or directly disregard the dependencies from the right-side context. Furthermore, the key information, i.e. human-typed sequences, is only used as prefix constraints in the decoding module. In this paper, we propose the INarIG (Iterative Non-autoregressive Instruct Generation) model, which constructs the human-typed sequence into an Instruction Unit and employs iterative decoding with subwords to fully utilize the input information given in the task. Our model is more competent in dealing with low-frequency words (the core scenario of this task), and achieves state-of-the-art results on the WMT22 and benchmark datasets, with a maximum increase of over 10% prediction accuracy.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
411,597
2411.18630
Volume Rendering of Human Hand Anatomy
We study the design of transfer functions for volumetric rendering of magnetic resonance imaging (MRI) datasets of human hands. Human hands are anatomically complex, containing various organs within a limited space, which presents challenges for volumetric rendering. We focus on hand musculoskeletal organs because they are volumetrically the largest inside the hand, and most important for the hand's main function, namely manipulation of objects. While volumetric rendering is a mature field, the choice of the transfer function for the different organs is arguably just as important as the choice of the specific volume rendering algorithm; we demonstrate that it significantly influences the clarity and interpretability of the resulting images. We assume that the hand MRI scans have already been segmented into the different organs (bones, muscles, tendons, ligaments, subcutaneous fat, etc.). Our method uses the hand MRI volume data, and the geometry of its inner organs and their known segmentation, to produce high-quality volume rendering images of the hand, and permits fine control over the appearance of each tissue. We contribute two families of transfer functions to emphasize different hand tissues of interest, while preserving the visual context of the hand. We also discuss and reduce artifacts present in standard volume ray-casting of human hands. We evaluate our volumetric rendering on five challenging hand motion sequences. Our experimental results demonstrate that our method improves hand anatomy visualization, compared to standard surface and volume rendering techniques.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
511,937
2403.11590
HSEmotion Team at the 6th ABAW Competition: Facial Expressions, Valence-Arousal and Emotion Intensity Prediction
This article presents our results for the sixth Affective Behavior Analysis in-the-wild (ABAW) competition. To improve the trustworthiness of facial analysis, we study the possibility of using pre-trained deep models that extract reliable emotional features without the need to fine-tune the neural networks for a downstream task. In particular, we introduce several lightweight models based on MobileViT, MobileFaceNet, EfficientNet, and DDAMFN architectures trained in multi-task scenarios to recognize facial expressions, valence, and arousal on static photos. These neural networks extract frame-level features fed into a simple classifier, e.g., linear feed-forward neural network, to predict emotion intensity, compound expressions, action units, facial expressions, and valence/arousal. Experimental results for five tasks from the sixth ABAW challenge demonstrate that our approach lets us significantly improve quality metrics on validation sets compared to existing non-ensemble techniques.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
438,774
2502.13825
Mixup Regularization: A Probabilistic Perspective
In recent years, mixup regularization has gained popularity as an effective way to improve the generalization performance of deep learning models by training on convex combinations of training data. While many mixup variants have been explored, the proper adoption of the technique to conditional density estimation and probabilistic machine learning remains relatively unexplored. This work introduces a novel framework for mixup regularization based on probabilistic fusion that is better suited for conditional density estimation tasks. For data distributed according to a member of the exponential family, we show that likelihood functions can be analytically fused using log-linear pooling. We further propose an extension of probabilistic mixup, which allows for fusion of inputs at an arbitrary intermediate layer of the neural network. We provide a theoretical analysis comparing our approach to standard mixup variants. Empirical results on synthetic and real datasets demonstrate the benefits of our proposed framework compared to existing mixup variants.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
535,534
1910.09358
A Decision-Theoretic Approach for Model Interpretability in Bayesian Framework
A salient approach to interpretable machine learning is to restrict modeling to simple models. In the Bayesian framework, this can be pursued by restricting the model structure and prior to favor interpretable models. Fundamentally, however, interpretability is about users' preferences, not the data generation mechanism; it is more natural to formulate interpretability as a utility function. In this work, we propose an interpretability utility, which explicates the trade-off between explanation fidelity and interpretability in the Bayesian framework. The method consists of two steps. First, a reference model, possibly a black-box Bayesian predictive model which does not compromise accuracy, is fitted to the training data. Second, a proxy model from an interpretable model family that best mimics the predictive behaviour of the reference model is found by optimizing the interpretability utility function. The approach is model agnostic -- neither the interpretable model nor the reference model are restricted to a certain class of models -- and the optimization problem can be solved using standard tools. Through experiments on real-world data sets, using decision trees as interpretable models and Bayesian additive regression models as reference models, we show that for the same level of interpretability, our approach generates more accurate models than the alternative of restricting the prior. We also propose a systematic way to measure stability of interpretable models constructed by different interpretability approaches and show that our proposed approach generates more stable models.
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
150,168
2310.15670
Leveraging Vision-Centric Multi-Modal Expertise for 3D Object Detection
Current research is primarily dedicated to advancing the accuracy of camera-only 3D object detectors (apprentice) through the knowledge transferred from LiDAR- or multi-modal-based counterparts (expert). However, the presence of the domain gap between LiDAR and camera features, coupled with the inherent incompatibility in temporal fusion, significantly hinders the effectiveness of distillation-based enhancements for apprentices. Motivated by the success of uni-modal distillation, an apprentice-friendly expert model would predominantly rely on camera features, while still achieving comparable performance to multi-modal models. To this end, we introduce VCD, a framework to improve the camera-only apprentice model, including an apprentice-friendly multi-modal expert and temporal-fusion-friendly distillation supervision. The multi-modal expert VCD-E adopts an identical structure as that of the camera-only apprentice in order to alleviate the feature disparity, and leverages LiDAR input as a depth prior to reconstruct the 3D scene, achieving the performance on par with other heterogeneous multi-modal experts. Additionally, a fine-grained trajectory-based distillation module is introduced with the purpose of individually rectifying the motion misalignment for each object in the scene. With those improvements, our camera-only apprentice VCD-A sets new state-of-the-art on nuScenes with a score of 63.1% NDS.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
402,400
1910.07521
End-to-End Cascaded U-Nets with a Localization Network for Kidney Tumor Segmentation
Kidney tumor segmentation emerges as a new frontier of computer vision in medical imaging. This is partly due to its challenging manual annotation and great medical impact. Within the scope of the Kidney Tumor Segmentation Challenge 2019, that is aiming at combined kidney and tumor segmentation, this work proposes a novel combination of 3D U-Nets---collectively denoted TuNet---utilizing the resulting kidney masks for the consecutive tumor segmentation. The proposed method achieves a Sørensen-Dice coefficient score of 0.902 for the kidney, and 0.408 for the tumor segmentation, computed from a five-fold cross-validation on the 210 patients available in the data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
149,646
1807.05670
Wireless Powered Communication Networks: TDD or FDD?
In this paper, we compare two common modes of duplexing in wireless powered communication networks (WPCN); namely TDD and FDD. So far, TDD has been the most widely used duplexing technique due to its simplicity. Yet, TDD does not allow the energy transmitter to function continuously, which means to deliver the same amount of energy as that in FDD, the transmitter has to have a higher maximum transmit power. On the other hand, when regulations for power spectral density limits are not restrictive, using FDD may lead to higher throughput than that of TDD by allocating less bandwidth to energy and therefore leaving more bandwidth for data. Hence, the best duplexing technique to choose for a specific problem needs careful examination and evaluation.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
102,974
2102.01503
A Survey On (Stochastic Fractal Search) Algorithm
Evolutionary Algorithms are naturally inspired approximation optimisation algorithms that usually interfere with science problems when common mathematical methods are unable to provide a good solution or finding the exact solution requires an unreasonable amount of time using traditional exhaustive search algorithms. The success of these population-based frameworks is mainly due to their flexibility and ease of adaptation to the most different and complex optimisation problems. This paper presents a metaheuristic algorithm called Stochastic Fractal Search, inspired by the natural phenomenon of growth based on a mathematical concept called the fractal, which is shown to be able to explore the search space more efficiently. This paper also focuses on the algorithm steps and some example applications of engineering design optimisation problems commonly used in the literature being applied to the proposed algorithm.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
218,134
2406.12277
What Matters in Memorizing and Recalling Facts? Multifaceted Benchmarks for Knowledge Probing in Language Models
Language models often struggle with handling factual knowledge, exhibiting factual hallucination issue. This makes it vital to evaluate the models' ability to recall its parametric knowledge about facts. In this study, we introduce a knowledge probing benchmark, BELIEF(ICL), to evaluate the knowledge recall ability of both encoder- and decoder-based pre-trained language models (PLMs) from diverse perspectives. BELIEFs utilize a multi-prompt dataset to evaluate PLM's accuracy, consistency, and reliability in factual knowledge recall. To enable a more reliable evaluation with BELIEFs, we semi-automatically create MyriadLAMA, which has massively diverse prompts. We validate the effectiveness of BELIEFs in comprehensively evaluating PLM's knowledge recall ability on diverse PLMs, including recent large language models (LLMs). We then investigate key factors in memorizing and recalling facts in PLMs, such as model size, pretraining strategy and corpora, instruction-tuning process and in-context learning settings. Finally, we reveal the limitation of the prompt-based knowledge probing. The MyriadLAMA is publicized.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
465,317
2212.04567
Enhanced prediction accuracy with uncertainty quantification in monitoring CO2 sequestration using convolutional neural networks
Monitoring changes inside a reservoir in real time is crucial for the success of CO2 injection and long-term storage. Machine learning (ML) is well-suited for real-time CO2 monitoring because of its computational efficiency. However, most existing applications of ML yield only one prediction (i.e., the expectation) for a given input, which may not properly reflect the distribution of the testing data, if it has a shift with respect to that of the training data. The Simultaneous Quantile Regression (SQR) method can estimate the entire conditional distribution of the target variable of a neural network via pinball loss. Here, we incorporate this technique into seismic inversion for purposes of CO2 monitoring. The uncertainty map is then calculated pixel by pixel from a particular prediction interval around the median. We also propose a novel data-augmentation method by sampling the uncertainty to further improve prediction accuracy. The developed methodology is tested on synthetic Kimberlina data, which are created by the Department of Energy and based on a CO2 capture and sequestration (CCS) project in California. The results prove that the proposed network can estimate the subsurface velocity rapidly and with sufficient resolution. Furthermore, the computed uncertainty quantifies the prediction accuracy. The method remains robust even if the testing data are distorted due to problems in the field data acquisition. Another test demonstrates the effectiveness of the developed data-augmentation method in increasing the spatial resolution of the estimated velocity field and in reducing the prediction error.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
335,484
1908.10122
Heuristic design of fuzzy inference systems: A review of three decades of research
This paper provides an in-depth review of the optimal design of type-1 and type-2 fuzzy inference systems (FIS) using five well known computational frameworks: genetic-fuzzy systems (GFS), neuro-fuzzy systems (NFS), hierarchical fuzzy systems (HFS), evolving fuzzy systems (EFS), and multi-objective fuzzy systems (MFS), which is in view that some of them are linked to each other. The heuristic design of GFS uses evolutionary algorithms for optimizing both Mamdani-type and Takagi-Sugeno-Kang-type fuzzy systems. Whereas, the NFS combines the FIS with neural network learning systems to improve the approximation ability. An HFS combines two or more low-dimensional fuzzy logic units in a hierarchical design to overcome the curse of dimensionality. An EFS solves the data streaming issues by evolving the system incrementally, and an MFS solves the multi-objective trade-offs like the simultaneous maximization of both interpretability and accuracy. This paper offers a synthesis of these dimensions and explores their potentials, challenges, and opportunities in FIS research. This review also examines the complex relations among these dimensions and the possibilities of combining one or more computational frameworks adding another dimension: deep fuzzy systems.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
143,026
1501.07439
The Arbitrarily Varying Wiretap Channel - Secret Randomness, Stability and Super-Activation
We define the common randomness assisted capacity of an arbitrarily varying wiretap channel (AVWC) when the Eavesdropper is kept ignorant about the common randomness. We prove a multi-letter capacity formula for this model. We prove that, if enough common randomness is used, the capacity formula can be given a single-shot form again. We then consider the opposite extremal case, where no common randomness is available. It is known that the capacity of the system can be discontinuous under these circumstances. We prove here that it is still stable in the sense that it is continuous around its positivity points. We further prove that discontinuities can only arise if the legal link is symmetrizable and characterize the points where it is positive. These results shed new light on the design principles of communication systems with embedded security features. At last we investigate the effect of super-activation of the message transmission capacity of AVWCs under the average error criterion. We give a complete characterization of those AVWCs that may be super-activated. The effect is thereby also related to the (conjectured) super-activation of the common randomness assisted capacity of AVWCs with an eavesdropper that gets to know the common randomness. Super-activation is based on the idea of "wasting" a few bits of non-secret messages in order to enable provably secret transmission of a large bulk of data, a concept that may prove to be of further importance in the design of communication systems. In this work we provide further insight into this phenomenon by providing a class of codes that is capacity-achieving and does not convey any information to the Eavesdropper.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
39,711
2008.01332
Real-Time Cleaning and Refinement of Facial Animation Signals
With the increasing demand for real-time animated 3D content in the entertainment industry and beyond, performance-based animation has garnered interest among both academic and industrial communities. While recent solutions for motion-capture animation have achieved impressive results, handmade post-processing is often needed, as the generated animations often contain artifacts. Existing real-time motion capture solutions have opted for standard signal processing methods to strengthen temporal coherence of the resulting animations and remove inaccuracies. While these methods produce smooth results, they inherently filter out part of the dynamics of facial motion, such as high frequency transient movements. In this work, we propose a real-time animation refining system that preserves -- or even restores -- the natural dynamics of facial motions. To do so, we leverage an off-the-shelf recurrent neural network architecture that learns proper facial dynamics patterns on clean animation data. We parametrize our system using the temporal derivatives of the signal, enabling our network to process animations at any framerate. Qualitative results show that our system is able to retrieve natural motion signals from noisy or degraded input animation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
190,279
2412.14020
Landscape of AI safety concerns -- A methodology to support safety assurance for AI-based autonomous systems
Artificial Intelligence (AI) has emerged as a key technology, driving advancements across a range of applications. Its integration into modern autonomous systems requires assuring safety. However, the challenge of assuring safety in systems that incorporate AI components is substantial. The lack of concrete specifications, and also the complexity of both the operational environment and the system itself, leads to various aspects of uncertain behavior and complicates the derivation of convincing evidence for system safety. Nonetheless, scholars proposed to thoroughly analyze and mitigate AI-specific insufficiencies, so-called AI safety concerns, which yields essential evidence supporting a convincing assurance case. In this paper, we build upon this idea and propose the so-called Landscape of AI Safety Concerns, a novel methodology designed to support the creation of safety assurance cases for AI-based systems by systematically demonstrating the absence of AI safety concerns. The methodology's application is illustrated through a case study involving a driverless regional train, demonstrating its practicality and effectiveness.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
518,545
2407.20741
Improving PINNs By Algebraic Inclusion of Boundary and Initial Conditions
"AI for Science" aims to solve fundamental scientific problems using AI techniques. As most physical phenomena can be described as Partial Differential Equations (PDEs) , approximating their solutions using neural networks has evolved as a central component of scientific-ML. Physics-Informed Neural Networks (PINNs) is the general method that has evolved for this task but its training is well-known to be very unstable. In this work we explore the possibility of changing the model being trained from being just a neural network to being a non-linear transformation of it - one that algebraically includes the boundary/initial conditions. This reduces the number of terms in the loss function than the standard PINN losses. We demonstrate that our modification leads to significant performance gains across a range of benchmark tasks, in various dimensions and without having to tweak the training algorithm. Our conclusions are based on conducting hundreds of experiments, in the fully unsupervised setting, over multiple linear and non-linear PDEs set to exactly solvable scenarios, which lends to a concrete measurement of our performance gains in terms of order(s) of magnitude lower fractional errors being achieved, than by standard PINNs. The code accompanying this manuscript is publicly available at, https://github.com/MorganREN/Improving-PINNs-By-Algebraic-Inclusion-of-Boundary-and-Initial-Conditions
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
477,272
1202.0296
Error Performance of Multidimensional Lattice Constellations-Part I: A Parallelotope Geometry Based Approach for the AWGN Channel
Multidimensional lattice constellations which present signal space diversity (SSD) have been extensively studied for single-antenna transmission over fading channels, with focus on their optimal design for achieving high diversity gain. In this two-part series of papers we present a novel combinatorial geometrical approach based on parallelotope geometry, for the performance evaluation of multidimensional finite lattice constellations with arbitrary structure, dimension and rank. In Part I, we present an analytical expression for the exact symbol error probability (SEP) of multidimensional signal sets, and two novel closed-form bounds, named Multiple Sphere Lower Bound (MSLB) and Multiple Sphere Upper Bound (MSUB). Part II extends the analysis to the transmission over fading channels, where multidimensional signal sets are commonly used to combat fading degradation. Numerical and simulation results show that the proposed geometrical approach leads to accurate and tight expressions, which can be efficiently used for the performance evaluation and the design of multidimensional lattice constellations, both in Additive White Gaussian Noise (AWGN) and fading channels.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
14,060
1910.01849
Randomized Shortest Paths with Net Flows and Capacity Constraints
This work extends the randomized shortest paths (RSP) model by investigating the net flow RSP and adding capacity constraints on edge flows. The standard RSP is a model of movement, or spread, through a network interpolating between a random-walk and a shortest-path behavior [30, 42, 49]. The framework assumes a unit flow injected into a source node and collected from a target node with flows minimizing the expected transportation cost, together with a relative entropy regularization term. In this context, the present work first develops the net flow RSP model considering that edge flows in opposite directions neutralize each other (as in electric networks), and proposes an algorithm for computing the expected routing costs between all pairs of nodes. This quantity is called the net flow RSP dissimilarity measure between nodes. Experimental comparisons on node clustering tasks indicate that the net flow RSP dissimilarity is competitive with other state-of-the-art dissimilarities. In the second part of the paper, it is shown how to introduce capacity constraints on edge flows, and a procedure is developed to solve this constrained problem by exploiting Lagrangian duality. These two extensions should improve significantly the scope of applications of the RSP framework.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
148,056
2410.12586
How to Make LLMs Forget: On Reversing In-Context Knowledge Edits
In-context knowledge editing (IKE) enables efficient modification of large language model (LLM) outputs without parameter changes and at zero-cost. However, it can be misused to manipulate responses opaquely, e.g., insert misinformation or offensive content. Such malicious interventions could be incorporated into high-level wrapped APIs where the final input prompt is not shown to end-users. To address this issue, we investigate the detection and reversal of IKE-edits. First, we demonstrate that IKE-edits can be detected with high accuracy (F1 > 80\%) using only the top-10 output probabilities of the next token, even in a black-box setting, e.g. proprietary LLMs with limited output information. Further, we introduce the novel task of reversing IKE-edits using specially tuned reversal tokens. We explore using both continuous and discrete reversal tokens, achieving over 80\% accuracy in recovering original, unedited outputs across multiple LLMs. Our continuous reversal tokens prove particularly effective, with minimal impact on unedited prompts. Through analysis of output distributions, attention patterns, and token rankings, we provide insights into IKE's effects on LLMs and how reversal tokens mitigate them. This work represents a significant step towards enhancing LLM resilience against potential misuse of in-context editing, improving their transparency and trustworthiness.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
499,097
2312.01575
A Challenging Multimodal Video Summary: Simultaneously Extracting and Generating Keyframe-Caption Pairs from Video
This paper proposes a practical multimodal video summarization task setting and a dataset to train and evaluate the task. The target task involves summarizing a given video into a predefined number of keyframe-caption pairs and displaying them in a listable format to grasp the video content quickly. This task aims to extract crucial scenes from the video in the form of images (keyframes) and generate corresponding captions explaining each keyframe's situation. This task is useful as a practical application and presents a highly challenging problem worthy of study. Specifically, achieving simultaneous optimization of the keyframe selection performance and caption quality necessitates careful consideration of the mutual dependence on both preceding and subsequent keyframes and captions. To facilitate subsequent research in this field, we also construct a dataset by expanding upon existing datasets and propose an evaluation framework. Furthermore, we develop two baseline systems and report their respective performance.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
412,498
2501.01859
Deposition Rates in Thermal Laser Epitaxy: Simulation and Experiment
The modeling of deposition rates in Thermal Laser Epitaxy (TLE) is essential for the accurate prediction of the evaporation process and for improved dynamic process control. We demonstrate excellent agreement between experimental data and a model based on a finite element simulation that describes the temperature distribution of an elemental source when irradiated with continuous wave laser radiation. The simulation strongly depends on the thermophysical constants of the material, data of which is lacking for many elements. Effective values for the parameters may be determined with precision by means of an unambiguous reference provided by the melting point of the material, which is directly observed during the experiments. TLE may therefore be used to study the high temperature thermophysical and optical properties of the elements.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
522,244
cs/0610128
Hierarchical Bin Buffering: Online Local Moments for Dynamic External Memory Arrays
Local moments are used for local regression, to compute statistical measures such as sums, averages, and standard deviations, and to approximate probability distributions. We consider the case where the data source is a very large I/O array of size n and we want to compute the first N local moments, for some constant N. Without precomputation, this requires O(n) time. We develop a sequence of algorithms of increasing sophistication that use precomputation and additional buffer space to speed up queries. The simpler algorithms partition the I/O array into consecutive ranges called bins, and they are applicable not only to local-moment queries, but also to algebraic queries (MAX, AVERAGE, SUM, etc.). With N buffers of size sqrt{n}, time complexity drops to O(sqrt n). A more sophisticated approach uses hierarchical buffering and has a logarithmic time complexity (O(b log_b n)), when using N hierarchical buffers of size n/b. Using Overlapped Bin Buffering, we show that only a single buffer is needed, as with wavelet-based algorithms, but using much less storage. Applications exist in multidimensional and statistical databases over massive data sets, interactive image processing, and visualization.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
539,812
2303.06833
Transformer-based Planning for Symbolic Regression
Symbolic regression (SR) is a challenging task in machine learning that involves finding a mathematical expression for a function based on its values. Recent advancements in SR have demonstrated the effectiveness of pre-trained transformer-based models in generating equations as sequences, leveraging large-scale pre-training on synthetic datasets and offering notable advantages in terms of inference time over classical Genetic Programming (GP) methods. However, these models primarily rely on supervised pre-training goals borrowed from text generation and overlook equation discovery objectives like accuracy and complexity. To address this, we propose TPSR, a Transformer-based Planning strategy for Symbolic Regression that incorporates Monte Carlo Tree Search into the transformer decoding process. Unlike conventional decoding strategies, TPSR enables the integration of non-differentiable feedback, such as fitting accuracy and complexity, as external sources of knowledge into the transformer-based equation generation process. Extensive experiments on various datasets show that our approach outperforms state-of-the-art methods, enhancing the model's fitting-complexity trade-off, extrapolation abilities, and robustness to noise.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
351,005
1607.04311
Defensive Distillation is Not Robust to Adversarial Examples
We show that defensive distillation is not secure: it is no more resistant to targeted misclassification attacks than unprotected neural networks.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
58,596
2403.03053
Neural Codebook Design for Network Beam Management
Obtaining accurate and timely channel state information (CSI) is a fundamental challenge for large antenna systems. Mobile systems like 5G use a beam management framework that joins the initial access, beamforming, CSI acquisition, and data transmission. The design of codebooks for these stages, however, is challenging due to their interrelationships, varying array sizes, and site-specific channel and user distributions. Furthermore, beam management is often focused on single-sector operations while ignoring the overarching network- and system-level optimization. In this paper, we propose an end-to-end learned codebook design algorithm, network beamspace learning (NBL), that captures and optimizes codebooks to mitigate interference while maximizing the achievable performance with extremely large hybrid arrays. The proposed algorithm requires limited shared information yet designs codebooks that outperform traditional codebooks by over 10 dB in beam alignment and achieve more than 25% improvements in network spectral efficiency.
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
false
true
435,048
2109.13963
Smart at what cost? Characterising Mobile Deep Neural Networks in the wild
With smartphones' omnipresence in people's pockets, Machine Learning (ML) on mobile is gaining traction as devices become more powerful. With applications ranging from visual filters to voice assistants, intelligence on mobile comes in many forms and facets. However, Deep Neural Network (DNN) inference remains a compute intensive workload, with devices struggling to support intelligence at the cost of responsiveness. On the one hand, there is significant research on reducing model runtime requirements and supporting deployment on embedded devices. On the other hand, the drive to maximise the accuracy of a task is supported by deeper and wider neural networks, making mobile deployment of state-of-the-art DNNs a moving target. In this paper, we perform the first holistic study of DNN usage in the wild in an attempt to track deployed models and match how these run on widely deployed devices. To this end, we analyse over 16k of the most popular apps in the Google Play Store to characterise their DNN usage and performance across devices of different capabilities, both across tiers and generations. Simultaneously, we measure the models' energy footprint, as a core cost dimension of any mobile deployment. To streamline the process, we have developed gaugeNN, a tool that automates the deployment, measurement and analysis of DNNs on devices, with support for different frameworks and platforms. Results from our experience study paint the landscape of deep learning deployments on smartphones and indicate their popularity across app developers. Furthermore, our study shows the gap between bespoke techniques and real-world deployments and the need for optimised deployment of deep learning models in a highly dynamic and heterogeneous ecosystem.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
257,802
2502.06853
Native Fortran Implementation of TensorFlow-Trained Deep and Bayesian Neural Networks
Over the past decade, the investigation of machine learning (ML) within the field of nuclear engineering has grown significantly. With many approaches reaching maturity, the next phase of investigation will determine the feasibility and usefulness of ML model implementation in a production setting. Several of the codes used for reactor design and assessment are primarily written in the Fortran language, which is not immediately compatible with TensorFlow-trained ML models. This study presents a framework for implementing deep neural networks (DNNs) and Bayesian neural networks (BNNs) in Fortran, allowing for native execution without TensorFlow's C API, Python runtime, or ONNX conversion. Designed for ease of use and computational efficiency, the framework can be implemented in any Fortran code, supporting iterative solvers and UQ via ensembles or BNNs. Verification was performed using a two-input, one-output test case composed of a noisy sinusoid to compare Fortran-based predictions to those from TensorFlow. The DNN predictions showed negligible differences and achieved a 19.6x speedup, whereas the BNN predictions exhibited minor disagreement, plausibly due to differences in random number generation. An 8.0x speedup was noted for BNN inference. The approach was then further verified on a nuclear-relevant problem predicting critical heat flux (CHF), which demonstrated similar behavior along with significant computational gains. Discussion regarding the framework's successful integration into the CTF thermal-hydraulics code is also included, outlining its practical usefulness. Overall, this framework was shown to be effective at implementing both DNN and BNN model inference within Fortran, allowing for the continued study of ML-based methods in real-world nuclear applications.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
532,295
1712.09677
Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods
In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all of the above methods are equivalent. We prove global nonasymptotic linear convergence rates for all methods and various measures of success, including primal function values, primal iterates (in the L2 sense), and dual function values. We also show that the primal iterates converge at an accelerated linear rate in the L1 sense. This is the first time a linear rate is shown for the stochastic heavy ball method (i.e., the stochastic gradient descent method with momentum). Under somewhat weaker conditions, we establish a sublinear convergence rate for Cesaro averages of primal iterates. Moreover, we propose a novel concept, which we call stochastic momentum, aimed at decreasing the cost of performing the momentum step. We prove linear convergence of several stochastic methods with stochastic momentum, and show that in some sparse data regimes and for sufficiently small momentum parameters, these methods enjoy better overall complexity than methods with deterministic momentum. Finally, we perform extensive numerical testing on artificial and real datasets, including data coming from average consensus problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
87,388
1907.00377
FVA: Modeling Perceived Friendliness of Virtual Agents Using Movement Characteristics
We present a new approach for improving the friendliness and warmth of a virtual agent in an AR environment by generating appropriate movement characteristics. Our algorithm is based on a novel data-driven friendliness model that is computed using a user study and psychological characteristics. We use our model to control the movements corresponding to the gaits, gestures, and gazing of friendly virtual agents (FVAs) as they interact with the user's avatar and other agents in the environment. We have integrated FVAs with an AR environment using a Microsoft HoloLens. Our algorithm can generate plausible movements at interactive rates to increase social presence. We also investigate the perception of a user in an AR setting and observe that an FVA yields a statistically significant improvement in the perceived friendliness and social presence compared to an agent without the friendliness modeling. We observe an increment of 5.71% in the mean responses to a friendliness measure and an improvement of 4.03% in the mean responses to a social presence measure.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
137,023
1506.02465
ASlib: A Benchmark Library for Algorithm Selection
The task of algorithm selection involves choosing an algorithm from a set of algorithms on a per-instance basis in order to exploit the varying performance of algorithms over a set of instances. The algorithm selection problem is attracting increasing attention from researchers and practitioners in AI. Years of fruitful applications in a number of domains have resulted in a large amount of data, but the community lacks a standard format or repository for this data. This situation makes it difficult to share and compare different approaches effectively, as is done in other, more established fields. It also unnecessarily hinders new researchers who want to work in this area. To address this problem, we introduce a standardized format for representing algorithm selection scenarios and a repository that contains a growing number of data sets from the literature. Our format has been designed to be able to express a wide variety of different scenarios. Demonstrating the breadth and power of our platform, we describe a set of example experiments that build and evaluate algorithm selection models through a common interface. The results display the potential of algorithm selection to achieve significant performance improvements across a broad range of problems and algorithms.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
43,925
2308.10146
OCHID-Fi: Occlusion-Robust Hand Pose Estimation in 3D via RF-Vision
Hand Pose Estimation (HPE) is crucial to many applications, but conventional camera-based CM-HPE methods are completely subject to Line-of-Sight (LoS), as cameras cannot capture occluded objects. In this paper, we propose to exploit Radio-Frequency-Vision (RF-vision) capable of bypassing obstacles for achieving occluded HPE, and we introduce OCHID-Fi as the first RF-HPE method with 3D pose estimation capability. OCHID-Fi employs wideband RF sensors widely available on smart devices (e.g., iPhones) to probe 3D human hand poses and extract their skeletons behind obstacles. To overcome the challenge of labeling RF imaging given its human-incomprehensible nature, OCHID-Fi employs a cross-modality and cross-domain training process. It uses a pre-trained CM-HPE network and a synchronized CM/RF dataset to guide the training of its complex-valued RF-HPE network under LoS conditions. It further transfers knowledge learned from the labeled LoS domain to the unlabeled occluded domain via adversarial learning, enabling OCHID-Fi to generalize to unseen occluded scenarios. Experimental results demonstrate the superiority of OCHID-Fi: it achieves comparable accuracy to CM-HPE under normal conditions while maintaining such accuracy even in occluded scenarios, with empirical evidence for its generalizability to new domains.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
386,594
2212.05652
PyPop7: A Pure-Python Library for Population-Based Black-Box Optimization
In this paper, we present an open-source pure-Python library called PyPop7 for black-box optimization (BBO). As population-based methods (e.g., evolutionary algorithms, swarm intelligence, and pattern search) become increasingly popular for BBO, the design goal of PyPop7 is to provide a unified API and elegant implementations for them, particularly in challenging high-dimensional scenarios. Since these population-based methods easily suffer from the notorious curse of dimensionality owing to random sampling as one of the core operations for most of them, various improvements and enhancements have recently been proposed to alleviate this issue, mainly by exploiting possible problem structures: e.g., decomposition of the search distribution or space, low-memory approximation, low-rank metric learning, variance reduction, ensembles of random subspaces, model self-adaptation, and fitness smoothing. These novel sampling strategies can better exploit different problem structures in high-dimensional search spaces and therefore often result in faster rates of convergence and/or better solution quality for large-scale BBO. PyPop7 now covers many of these important advances across a set of well-established BBO algorithm families and also provides an open-access interface for adding the latest or missing black-box optimizers for further functionality extensions. Its well-designed source code (under the GPL-3.0 license) and full-fledged online documents (under the CC-BY 4.0 license) are freely available at \url{https://github.com/Evolutionary-Intelligence/pypop} and \url{https://pypop.readthedocs.io}, respectively.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
335,842
2302.10179
A Dynamic Feedforward Control Strategy for Energy-efficient Building System Operation
The development of current building energy system operation has benefited from: 1. informational support for optimal design through simulation or first-principles models; 2. system load and energy prediction through machine learning (ML). Our literature review shows that most current control strategies and optimization algorithms rely on real-time feedback or use only predictive signals based on ML data fitting. They do not fully utilize dynamic building information. In other words, embedding dynamic prior knowledge from building system characteristics directly into system control has drawn little attention. In this context, we propose an engineer-friendly control strategy framework. The framework is integrated with a feedforward loop that embeds a dynamic building environment with both leading and lagging system information: simulation output combined with system characteristic information is fed into the ML predictive algorithms. The ML model generates step-ahead information via rolling-window feed-in of the simulation output, minimizing the errors of its forecasting predecessor in a loop to achieve an overall optimum. We tested the framework on a heating system control case against typical control strategies, which shows that it offers a further energy-saving potential of 15%.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
346,699
2406.09617
Multimodal Large Language Models with Fusion Low Rank Adaptation for Device Directed Speech Detection
Although Large Language Models (LLMs) have shown promise for human-like conversations, they are primarily pre-trained on text data. Incorporating audio or video improves performance, but collecting large-scale multimodal data and pre-training multimodal LLMs is challenging. To this end, we propose a Fusion Low Rank Adaptation (FLoRA) technique that efficiently adapts a pre-trained unimodal LLM to consume new, previously unseen modalities via low rank adaptation. For device-directed speech detection, using FLoRA, the multimodal LLM achieves 22% relative reduction in equal error rate (EER) over the text-only approach and attains performance parity with its full fine-tuning (FFT) counterpart while needing to tune only a fraction of its parameters. Furthermore, with the newly introduced adapter dropout, FLoRA is robust to missing data, improving over FFT by 20% lower EER and 56% lower false accept rate. The proposed approach scales well for model sizes from 16M to 3B parameters.
true
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
464,008
2211.05543
Vis2Mus: Exploring Multimodal Representation Mapping for Controllable Music Generation
In this study, we explore the representation mapping from the domain of visual arts to the domain of music, with which we can use visual arts as an effective handle to control music generation. Unlike most studies in multimodal representation learning that are purely data-driven, we adopt an analysis-by-synthesis approach that combines deep music representation learning with user studies. Such an approach enables us to discover \textit{interpretable} representation mapping without a huge amount of paired data. In particular, we discover that the visual-to-music mapping has a nice property similar to equivariance. In other words, we can use various image transformations, say, changing brightness, changing contrast, or style transfer, to control the corresponding transformations in the music domain. In addition, we release the Vis2Mus system as a controllable interface for symbolic music generation.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
329,588
1904.05417
Unsupervised Deep Learning Algorithm for PDE-based Forward and Inverse Problems
We propose a neural network-based algorithm for solving forward and inverse problems for partial differential equations in an unsupervised fashion. The solution is approximated by a deep neural network which is the minimizer of a cost function, and satisfies the PDE, boundary conditions, and additional regularizations. The method is mesh-free and can be easily applied to an arbitrary regular domain. We focus on 2D second-order elliptic systems with non-constant coefficients, with application to Electrical Impedance Tomography.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
127,309
2206.13752
Sub-Block Rearranged Staircase Codes for Optical Transport Networks
We propose a new family of spatially coupled product codes, called sub-block rearranged staircase (SR-staircase) codes. Each SR-staircase code block is constructed by encoding rearranged preceding code blocks and new information blocks, where the rearrangement involves sub-block decomposition and transposition. The proposed codes can be constructed to have each code block size $1/q$ that of conventional staircase codes while having the same rate and component codes, for any positive integer $q$. In this regard, we can use strong algebraic component codes to construct SR-staircase codes with a similar or the same code block size and rate as staircase codes with weak component codes. Moreover, both waterfall and error floor performance can be further improved by using a large coupling width. The superior performance of the proposed codes is demonstrated through density evolution and error floor analysis as well as simulation.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
305,073
cmp-lg/9604005
Better Language Models with Model Merging
This paper investigates model merging, a technique for deriving Markov models from text or speech corpora. Models are derived by starting with a large and specific model and by successively combining states to build smaller and more general models. We present methods to reduce the time complexity of the algorithm and report on experiments on deriving language models for a speech recognition task. The experiments show the advantage of model merging over the standard bigram approach. The merged model assigns a lower perplexity to the test set and uses considerably fewer states.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
536,507
2103.16629
Learning Lipschitz Feedback Policies from Expert Demonstrations: Closed-Loop Guarantees, Generalization and Robustness
In this work, we propose a framework to learn feedback control policies with guarantees on closed-loop generalization and adversarial robustness. These policies are learned directly from expert demonstrations, contained in a dataset of state-control input pairs, without any prior knowledge of the task and system model. We use a Lipschitz-constrained loss minimization scheme to learn feedback policies with certified closed-loop robustness, wherein the Lipschitz constraint serves as a mechanism to tune the generalization performance and robustness to adversarial disturbances. Our analysis exploits the Lipschitz property to obtain closed-loop guarantees on generalization and robustness of the learned policies. In particular, we derive a finite sample bound on the policy learning error and establish robust closed-loop stability under the learned control policy. We also derive bounds on the closed-loop regret with respect to the expert policy and the deterioration of closed-loop performance under bounded (adversarial) disturbances to the state measurements. Numerical results validate our analysis and demonstrate the effectiveness of our robust feedback policy learning framework. Finally, our results suggest the existence of a potential tradeoff between nominal closed-loop performance and adversarial robustness, and that improvements in nominal closed-loop performance can only be made at the expense of robustness to adversarial perturbations.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
227,645
2305.17148
Differentially Private Low-dimensional Synthetic Data from High-dimensional Datasets
Differentially private synthetic data provide a powerful mechanism to enable data analysis while protecting sensitive information about individuals. However, when the data lie in a high-dimensional space, the accuracy of the synthetic data suffers from the curse of dimensionality. In this paper, we propose a differentially private algorithm to generate low-dimensional synthetic data efficiently from a high-dimensional dataset with a utility guarantee with respect to the Wasserstein distance. A key step of our algorithm is a private principal component analysis (PCA) procedure with a near-optimal accuracy bound that circumvents the curse of dimensionality. Unlike the standard perturbation analysis, our analysis of private PCA works without assuming the spectral gap for the covariance matrix.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
true
368,432
1504.03878
Optimization results for a generalized coupon collector problem
We study in this paper a generalized coupon collector problem, which consists in analyzing the time needed to collect a given number of distinct coupons that are drawn from a set of coupons with an arbitrary probability distribution. We suppose that a special coupon called the null coupon can be drawn but never belongs to any collection. In this context, we prove that the almost uniform distribution, for which all the non-null coupons have the same drawing probability, is the distribution which stochastically minimizes the time needed to collect a fixed number of distinct coupons. Moreover, we show that in a given closed subset of probability distributions, the distribution with all its entries, but one, equal to the smallest possible value is the one which stochastically maximizes the time needed to collect a fixed number of distinct coupons. A computer science application shows the utility of these results.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
42,078
2402.09338
Neural Networks Asymptotic Behaviours for the Resolution of Inverse Problems
This paper presents a study of the effectiveness of Neural Network (NN) techniques for deconvolution inverse problems relevant for applications in Quantum Field Theory, but also in more general contexts. We consider NNs' asymptotic limits, corresponding to Gaussian Processes (GPs), where non-linearities in the parameters of the NN can be neglected. Using these resulting GPs, we address the deconvolution inverse problem in the case of a quantum harmonic oscillator simulated through Monte Carlo techniques on a lattice. In this simple toy model, the results of the inversion can be compared with the known analytical solution. Our findings indicate that solving the inverse problem with a NN yields worse results than those obtained using the GPs derived from the NNs' asymptotic limits. Furthermore, we observe the trained NN's accuracy approaching that of GPs with increasing layer width. Notably, one of these GPs defies interpretation as a probabilistic model, offering a novel perspective compared to established methods in the literature. Our results suggest the need for detailed studies of the training dynamics in more realistic set-ups.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
429,474
1803.07512
Fusion of stereo and still monocular depth estimates in a self-supervised learning context
We study how autonomous robots can learn by themselves to improve their depth estimation capability. In particular, we investigate a self-supervised learning setup in which stereo vision depth estimates serve as targets for a convolutional neural network (CNN) that transforms a single still image to a dense depth map. After training, the stereo and mono estimates are fused with a novel fusion method that preserves high confidence stereo estimates, while leveraging the CNN estimates in the low-confidence regions. The main contribution of the article is that it is shown that the fused estimates lead to a higher performance than the stereo vision estimates alone. Experiments are performed on the KITTI dataset, and on board of a Parrot SLAMDunk, showing that even rather limited CNNs can help provide stereo vision equipped robots with more reliable depth maps for autonomous navigation.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
93,070
1809.10272
Multi-variate correlation and mixtures of product measures
Total correlation (`TC') and dual total correlation (`DTC') are two classical ways to quantify the correlation among an $n$-tuple of random variables. They both reduce to mutual information when $n=2$. The first part of this paper sets up the theory of TC and DTC for general random variables, not necessarily finite-valued. This generality has not been exposed in the literature before. The second part considers the structural implications when a joint distribution $\mu$ has small TC or DTC. If $\mathrm{TC}(\mu) = o(n)$, then $\mu$ is close to a product measure according to a suitable transportation metric: this follows directly from Marton's classical transportation-entropy inequality. If $\mathrm{DTC}(\mu) = o(n)$, then the structural consequence is more complicated: $\mu$ is a mixture of a controlled number of terms, most of them close to product measures in the transportation metric. This is the main new result of the paper.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
108,872
2405.14176
Certified Robustness against Sparse Adversarial Perturbations via Data Localization
Recent work in adversarial robustness suggests that natural data distributions are localized, i.e., they place high probability in small volume regions of the input space, and that this property can be utilized for designing classifiers with improved robustness guarantees for $\ell_2$-bounded perturbations. Yet, it is still unclear if this observation holds true for more general metrics. In this work, we extend this theory to $\ell_0$-bounded adversarial perturbations, where the attacker can modify a few pixels of the image but is unrestricted in the magnitude of perturbation, and we show necessary and sufficient conditions for the existence of $\ell_0$-robust classifiers. Theoretical certification approaches in this regime essentially employ voting over a large ensemble of classifiers. Such procedures are combinatorial and expensive or require complicated certification techniques. In contrast, a simple classifier emerges from our theory, dubbed Box-NN, which naturally incorporates the geometry of the problem and improves upon the current state-of-the-art in certified robustness against sparse attacks for the MNIST and Fashion-MNIST datasets.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
456,292
1012.2621
Throughput and Latency of Acyclic Erasure Networks with Feedback in a Finite Buffer Regime
The exact Markov modeling analysis of erasure networks with finite buffers is an extremely hard problem due to the large number of states in the system. In such networks, packets are lost due to either link erasures or blocking by the full buffers. In this paper, we propose a novel method that iteratively estimates the performance parameters of the network and more importantly reduces the computational complexity compared to the exact analysis. This is the first work that analytically studies the effect of finite memory on the throughput and latency in general wired acyclic networks with erasure links. As a case study, a random packet routing scheme with ideal feedback on the links is used. The proposed framework yields a fairly accurate estimate of the probability distribution of buffer occupancies at the intermediate nodes using which we can not only identify the congested and starving nodes but also obtain analytical expressions for throughput and average delay of a packet in the network. The theoretical framework presented here can be applied to many wired networks, from Internet to more futuristic applications such as networks-on-chip under various communication and network coding scenarios.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
8,512
2109.13527
Concept-Aware Denoising Graph Neural Network for Micro-Video Recommendation
Recently, micro-video sharing platforms such as Kuaishou and Tiktok have become a major source of information for people's lives. Thanks to the large traffic volume, short video lifespan and streaming fashion of these services, it has become more and more pressing to improve the existing recommender systems to accommodate these challenges in a cost-effective way. In this paper, we propose a novel concept-aware denoising graph neural network (named CONDE) for micro-video recommendation. CONDE consists of a three-phase graph convolution process to derive user and micro-video representations: warm-up propagation, graph denoising and preference refinement. A heterogeneous tripartite graph is constructed by connecting user nodes with video nodes, and video nodes with associated concept nodes, extracted from captions and comments of the videos. To address the noisy information in the graph, we introduce a user-oriented graph denoising phase to extract a subgraph which can better reflect the user's preference. Despite the main focus of micro-video recommendation in this paper, we also show that our method can be generalized to other types of tasks. Therefore, we also conduct empirical studies on a well-known public E-commerce dataset. The experimental results suggest that the proposed CONDE achieves significantly better recommendation performance than the existing state-of-the-art solutions.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
257,665
1001.1133
Multi-cell MIMO Downlink with Fairness Criteria: the Large System Limit
We consider the downlink of a cellular network with multiple cells and multi-antenna base stations including arbitrary inter-cell cooperation, realistic distance-dependent pathloss and general "fairness" requirements. Beyond Monte Carlo simulation, no efficient computation method to evaluate the ergodic throughput of such systems has been provided so far. We propose an analytic method based on the combination of large random matrix theory with Lagrangian optimization. The proposed method is computationally much more efficient than Monte Carlo simulation and provides a very accurate approximation (almost indistinguishable) for the actual finite-dimensional case, even for a small number of users and base station antennas. Numerical examples include linear 2-cell and planar three-sectored 7-cell layouts, with no inter-cell cooperation, sector cooperation, and full cooperation.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
5,287
2307.05898
Rectifying Noisy Labels with Sequential Prior: Multi-Scale Temporal Feature Affinity Learning for Robust Video Segmentation
Noisy label problems inevitably exist in medical image segmentation, causing severe performance degradation. Previous segmentation methods for noisy label problems only utilize a single image, while the potential of leveraging the correlation between images has been overlooked. Especially for video segmentation, adjacent frames contain rich contextual information beneficial in recognizing noisy labels. Based on these two insights, we propose a Multi-Scale Temporal Feature Affinity Learning (MS-TFAL) framework to resolve noisy-labeled medical video segmentation issues. First, we argue that the sequential prior of videos is an effective reference, i.e., pixel-level features from adjacent frames are close in distance for the same class and far in distance otherwise. Therefore, Temporal Feature Affinity Learning (TFAL) is devised to indicate possible noisy labels by evaluating the affinity between pixels in two adjacent frames. We also notice that the noise distribution exhibits considerable variations across video, image, and pixel levels. Accordingly, we introduce Multi-Scale Supervision (MSS) to supervise the network from three different perspectives by re-weighting and refining the samples. This design enables the network to concentrate on clean samples in a coarse-to-fine manner. Experiments with both synthetic and real-world label noise demonstrate that our method outperforms recent state-of-the-art robust segmentation approaches. Code is available at https://github.com/BeileiCui/MS-TFAL.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
378,892
2206.05278
Dual-Branch Squeeze-Fusion-Excitation Module for Cross-Modality Registration of Cardiac SPECT and CT
Single-photon emission computed tomography (SPECT) is a widely applied imaging approach for diagnosis of coronary artery diseases. Attenuation maps (u-maps) derived from computed tomography (CT) are utilized for attenuation correction (AC) to improve diagnostic accuracy of cardiac SPECT. However, SPECT and CT are obtained sequentially in clinical practice, which potentially induces misregistration between the two scans. Convolutional neural networks (CNN) are powerful tools for medical image registration. Previous CNN-based methods for cross-modality registration either directly concatenated two input modalities as an early feature fusion or extracted image features using two separate CNN modules for a late fusion. These methods do not fully extract or fuse the cross-modality information. Besides, deep-learning-based rigid registration of cardiac SPECT and CT-derived u-maps has not been investigated before. In this paper, we propose a Dual-Branch Squeeze-Fusion-Excitation (DuSFE) module for the registration of cardiac SPECT and CT-derived u-maps. DuSFE fuses the knowledge from multiple modalities to recalibrate both channel-wise and spatial features for each modality. DuSFE can be embedded at multiple convolutional layers to enable feature fusion at different spatial dimensions. Our studies using clinical data demonstrated that a network embedded with DuSFE generated substantially lower registration errors and therefore more accurate AC SPECT images than previous methods.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
301,953
2310.17315
Nabra: Syrian Arabic Dialects with Morphological Annotations
This paper presents Nabra, a corpus of Syrian Arabic dialects with morphological annotations. A team of Syrian natives collected more than 6K sentences containing about 60K words from several sources, including social media posts, scripts of movies and series, lyrics of songs, and local proverbs, to build Nabra. Nabra covers several local Syrian dialects including those of Aleppo, Damascus, Deir-ezzur, Hama, Homs, Huran, Latakia, Mardin, Raqqah, and Suwayda. A team of nine annotators annotated the 60K tokens with full morphological annotations across sentence contexts. We trained the annotators to follow methodological annotation guidelines to ensure unique morpheme annotations, and normalized the annotations. F1 and kappa agreement scores ranged between 74% and 98% across features, showing the excellent quality of Nabra annotations. Our corpus is open-source and publicly available as part of the Currasat portal https://sina.birzeit.edu/currasat.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
403,081
1710.02543
Socially Compliant Navigation through Raw Depth Inputs with Generative Adversarial Imitation Learning
We present an approach for mobile robots to learn to navigate in dynamic environments with pedestrians via raw depth inputs, in a socially compliant manner. To achieve this, we adopt a generative adversarial imitation learning (GAIL) strategy, which improves upon a pre-trained behavior cloning policy. Our approach overcomes the disadvantages of previous methods, which heavily depend on full knowledge of the location and velocity of nearby pedestrians: this not only requires specific sensors, but extracting such state information from raw sensory input can also consume substantial computation time. In this paper, our proposed GAIL-based model performs directly on raw depth inputs and plans in real-time. Experiments show that our GAIL-based approach greatly improves the safety and efficiency of the behavior of mobile robots over pure behavior cloning. The real-world deployment also shows that our method is capable of guiding autonomous vehicles to navigate in a socially compliant manner directly through raw depth inputs. In addition, we release a simulation plugin for modeling pedestrian behaviors based on the social force model.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
82,180
2308.08493
Time Travel in LLMs: Tracing Data Contamination in Large Language Models
Data contamination, i.e., the presence of test data from downstream tasks in the training data of large language models (LLMs), is a potential major issue in measuring LLMs' real effectiveness on other tasks. We propose a straightforward yet effective method for identifying data contamination within LLMs. At its core, our approach starts by identifying potential contamination at the instance level; using this information, our approach then assesses wider contamination at the partition level. To estimate contamination of individual instances, we employ "guided instruction": a prompt consisting of the dataset name, partition type, and the random-length initial segment of a reference instance, asking the LLM to complete it. An instance is flagged as contaminated if the LLM's output either exactly or nearly matches the latter segment of the reference. To understand if an entire partition is contaminated, we propose two ideas. The first idea marks a dataset partition as contaminated if the average overlap score with the reference instances (as measured by ROUGE-L or BLEURT) is statistically significantly better with the completions from guided instruction compared to a "general instruction" that does not include the dataset and partition name. The second idea marks a dataset partition as contaminated if a classifier based on GPT-4 with a few-shot in-context learning prompt marks multiple generated completions as exact/near-exact matches of the corresponding reference instances. Our best method achieves an accuracy between 92% and 100% in detecting whether an LLM is contaminated with seven datasets, containing train and test/validation partitions, when contrasted with manual evaluation by human experts. Further, our findings indicate that GPT-4 is contaminated with the AG News, WNLI, and XSum datasets.
false
false
false
false
true
false
true
false
true
false
false
false
true
false
false
false
false
false
385,925
2202.12515
Faithful learning with sure data for lung nodule diagnosis
Recent evolution in deep learning has proven its value for CT-based lung nodule classification. Most current techniques are intrinsically black-box systems, suffering from two generalizability issues in clinical practice. First, benign-malignant discrimination is often assessed by human observers without pathologic diagnoses at the nodule level. We term these data "unsure data". Second, a classifier does not necessarily acquire reliable nodule features for stable learning and robust prediction with patch-level labels during learning. In this study, we construct a sure dataset with pathologically-confirmed labels and propose a collaborative learning framework to facilitate sure nodule classification by integrating unsure data knowledge through nodule segmentation and malignancy score regression. A loss function is designed to learn reliable features by introducing interpretability constraints regulated with nodule segmentation maps. Furthermore, based on model inference results that reflect the understanding from both machine and experts, we explore a new nodule analysis method for similar historical nodule retrieval and interpretable diagnosis. Detailed experimental results demonstrate that our approach is beneficial for achieving improved performance coupled with faithful model reasoning for lung cancer prediction. Extensive cross-evaluation results further illustrate the effect of unsure data for deep-learning-based methods in lung nodule classification.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
282,272
2411.13820
InstCache: A Predictive Cache for LLM Serving
Large language models are revolutionizing every aspect of human life. However, this unprecedented power comes at the cost of significant computing intensity, resulting in long latency and a large energy footprint. Key-Value Cache and Semantic Cache have been proposed as solutions to the above problem, but both suffer from limited scalability due to significant memory cost for each token or instruction embedding. Motivated by the observations that most instructions are short, repetitive, and predictable by LLMs, we propose to predict user instructions with an instruction-aligned LLM and store them in a predictive cache, termed InstCache. We introduce an instruction pre-population algorithm based on the negative log likelihood of instructions, determining the cache size with regard to the hit rate. The proposed InstCache is efficiently implemented as a hash table with minimal lookup latency for deployment. Experimental results show that InstCache can achieve up to a 51.34% hit rate on the LMSys dataset, which corresponds to a 2x speedup, at a memory cost of only 4.5GB.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
509,930
2305.15072
PathAsst: A Generative Foundation AI Assistant Towards Artificial General Intelligence of Pathology
As advances in large language models (LLMs) and multimodal techniques continue to mature, the development of general-purpose multimodal large language models (MLLMs) has surged, offering significant applications in interpreting natural images. However, the field of pathology has largely remained untapped, particularly in gathering high-quality data and designing comprehensive model frameworks. To bridge the gap in pathology MLLMs, we present PathAsst, a multimodal generative foundation AI assistant to revolutionize diagnostic and predictive analytics in pathology. The development of PathAsst involves three pivotal steps: data acquisition, CLIP model adaptation, and the training of PathAsst's multimodal generative capabilities. First, we collect over 207K high-quality pathology image-text pairs from authoritative sources. Leveraging the advanced power of ChatGPT, we generate over 180K instruction-following samples. Furthermore, we devise additional instruction-following data specifically tailored for invoking eight pathology-specific sub-models we prepared, allowing PathAsst to effectively collaborate with these models and enhancing its diagnostic ability. Second, by leveraging the collected data, we construct PathCLIP, a pathology-dedicated CLIP, to enhance PathAsst's capabilities in interpreting pathology images. Finally, we integrate PathCLIP with Vicuna-13b and utilize pathology-specific instruction-tuning data to enhance the multimodal generation capacity of PathAsst and bolster its synergistic interactions with sub-models. The experimental results of PathAsst show the potential of harnessing AI-powered generative foundation models to improve pathology diagnosis and treatment processes.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
367,436
2202.13840
Text Smoothing: Enhance Various Data Augmentation Methods on Text Classification Tasks
Before entering the neural network, a token is generally converted to the corresponding one-hot representation, which is a discrete distribution of the vocabulary. Smoothed representation is the probability of candidate tokens obtained from a pre-trained masked language model, which can be seen as a more informative substitution to the one-hot representation. We propose an efficient data augmentation method, termed text smoothing, by converting a sentence from its one-hot representation to a controllable smoothed representation. We evaluate text smoothing on different benchmarks in a low-resource regime. Experimental results show that text smoothing outperforms various mainstream data augmentation methods by a substantial margin. Moreover, text smoothing can be combined with those data augmentation methods to achieve better performance.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
282,762