Dataset schema (column: type, value range):
  id: string, length 9 to 16
  title: string, length 4 to 278
  abstract: string, length 3 to 4.08k
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each
  __index_level_0__: int64, 0 to 541k

Each record below lists the arXiv id, title, and abstract, followed by a single line giving the category columns that are true for that record and its __index_level_0__ value.
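To make the flag-per-column layout concrete, here is a minimal sketch of how one flattened record could be decoded back into its active category labels. It assumes rows are available as plain Python dictionaries keyed by the column names above; the CATEGORY_COLUMNS list, the active_categories helper, and the example_row values are illustrative assumptions, not part of the dataset itself.

```python
# Hypothetical helper for decoding one record of this dataset into its
# active category labels. The column order mirrors the schema above.

CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_categories(row: dict) -> list[str]:
    """Return the category columns whose boolean flag is True for this row."""
    return [name for name in CATEGORY_COLUMNS if row.get(name, False)]

# Example row, abridged from the first record below (all omitted flags are False).
example_row = {
    "id": "1905.07790",
    "title": "Correlation Coefficients and Semantic Textual Similarity",
    "cs.LG": True,
    "cs.CL": True,
    "__index_level_0__": 131325,
}

print(active_categories(example_row))  # ['cs.LG', 'cs.CL']
```

Running the sketch prints ['cs.LG', 'cs.CL'], matching the first record below.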
1905.07790
Correlation Coefficients and Semantic Textual Similarity
A large body of research into semantic textual similarity has focused on constructing state-of-the-art embeddings using sophisticated modelling, careful choice of learning signals and many clever tricks. By contrast, little attention has been devoted to similarity measures between these embeddings, with cosine similarity being used unquestionably in the majority of cases. In this work, we illustrate that for all common word vectors, cosine similarity is essentially equivalent to the Pearson correlation coefficient, which provides some justification for its use. We thoroughly characterise cases where Pearson correlation (and thus cosine similarity) is unfit as similarity measure. Importantly, we show that Pearson correlation is appropriate for some word vectors but not others. When it is not appropriate, we illustrate how common non-parametric rank correlation coefficients can be used instead to significantly improve performance. We support our analysis with a series of evaluations on word-level and sentence-level semantic textual similarity benchmarks. On the latter, we show that even the simplest averaged word vectors compared by rank correlation easily rival the strongest deep representations compared by cosine similarity.
Categories: cs.LG, cs.CL | __index_level_0__: 131,325
2408.02795
Entity Retrieval for Answering Entity-Centric Questions
The similarity between the question and indexed documents is a crucial factor in document retrieval for retrieval-augmented question answering. Although this is typically the only method for obtaining the relevant documents, it is not the sole approach when dealing with entity-centric questions. In this study, we propose Entity Retrieval, a novel retrieval method which rather than relying on question-document similarity, depends on the salient entities within the question to identify the retrieval documents. We conduct an in-depth analysis of the performance of both dense and sparse retrieval methods in comparison to Entity Retrieval. Our findings reveal that our method not only leads to more accurate answers to entity-centric questions but also operates more efficiently.
Categories: cs.IR, cs.CL | __index_level_0__: 478,765
2102.03355
Thermal Control of Laser Powder Bed Fusion Using Deep Reinforcement Learning
Powder-based additive manufacturing techniques provide tools to construct intricate structures that are difficult to manufacture using conventional methods. In Laser Powder Bed Fusion, components are built by selectively melting specific areas of the powder bed, to form the two-dimensional cross-section of the specific part. However, the high occurrence of defects impacts the adoption of this method for precision applications. Therefore, a control policy for dynamically altering process parameters to avoid phenomena that lead to defect occurrences is necessary. A Deep Reinforcement Learning (DRL) framework that derives a versatile control strategy for minimizing the likelihood of these defects is presented. The generated control policy alters the velocity of the laser during the melting process to ensure the consistency of the melt pool and reduce overheating in the generated product. The control policy is trained and validated on efficient simulations of the continuum temperature distribution of the powder bed layer under various laser trajectories.
Categories: cs.LG | __index_level_0__: 218,719
2306.11518
One model to rule them all: ranking Slovene summarizers
Text summarization is an essential task in natural language processing, and researchers have developed various approaches over the years, ranging from rule-based systems to neural networks. However, there is no single model or approach that performs well on every type of text. We propose a system that recommends the most suitable summarization model for a given text. The proposed system employs a fully connected neural network that analyzes the input content and predicts which summarizer should score the best in terms of ROUGE score for a given input. The meta-model selects among four different summarization models, developed for the Slovene language, using different properties of the input, in particular its Doc2Vec document representation. The four Slovene summarization models deal with different challenges associated with text summarization in a less-resourced language. We evaluate the proposed SloMetaSum model performance automatically and parts of it manually. The results show that the system successfully automates the step of manually selecting the best model.
Categories: cs.CL | __index_level_0__: 374,623
2307.00895
Synthesis of Contrast-Enhanced Breast MRI Using Multi-b-Value DWI-based Hierarchical Fusion Network with Attention Mechanism
Magnetic resonance imaging (MRI) is the most sensitive technique for breast cancer detection among current clinical imaging modalities. Contrast-enhanced MRI (CE-MRI) provides superior differentiation between tumors and invaded healthy tissue, and has become an indispensable technique in the detection and evaluation of cancer. However, the use of gadolinium-based contrast agents (GBCA) to obtain CE-MRI may be associated with nephrogenic systemic fibrosis and may lead to bioaccumulation in the brain, posing a potential risk to human health. Moreover, and likely more important, the use of gadolinium-based contrast agents requires the cannulation of a vein, and the injection of the contrast media which is cumbersome and places a burden on the patient. To reduce the use of contrast agents, diffusion-weighted imaging (DWI) is emerging as a key imaging technique, although currently usually complementing breast CE-MRI. In this study, we develop a multi-sequence fusion network to synthesize CE-MRI based on T1-weighted MRI and DWIs. DWIs with different b-values are fused to efficiently utilize the difference features of DWIs. Rather than proposing a pure data-driven approach, we invent a multi-sequence attention module to obtain refined feature maps, and leverage hierarchical representation information fused at different scales while utilizing the contributions from different sequences from a model-driven approach by introducing the weighted difference module. The results show that the multi-b-value DWI-based fusion model can potentially be used to synthesize CE-MRI, thus theoretically reducing or avoiding the use of GBCA, thereby minimizing the burden to patients. Our code is available at \url{https://github.com/Netherlands-Cancer-Institute/CE-MRI}.
Categories: cs.CV | __index_level_0__: 377,169
1805.12152
Robustness May Be at Odds with Accuracy
We show that there may exist an inherent tension between the goal of adversarial robustness and that of standard generalization. Specifically, training robust models may not only be more resource-consuming, but also lead to a reduction of standard accuracy. We demonstrate that this trade-off between the standard accuracy of a model and its robustness to adversarial perturbations provably exists in a fairly simple and natural setting. These findings also corroborate a similar phenomenon observed empirically in more complex settings. Further, we argue that this phenomenon is a consequence of robust classifiers learning fundamentally different feature representations than standard classifiers. These differences, in particular, seem to result in unexpected benefits: the representations learned by robust models tend to align better with salient data characteristics and human perception.
Categories: cs.LG, cs.CV, cs.NE | __index_level_0__: 99,101
2004.12247
Hierarchical Multi Task Learning with Subword Contextual Embeddings for Languages with Rich Morphology
Morphological information is important for many sequence labeling tasks in Natural Language Processing (NLP). Yet, existing approaches rely heavily on manual annotations or external software to capture this information. In this study, we propose using subword contextual embeddings to capture the morphological information for languages with rich morphology. In addition, we incorporate these embeddings in a hierarchical multi-task setting which has not been employed before, to the best of our knowledge. Evaluated on Dependency Parsing (DEP) and Named Entity Recognition (NER) tasks, which are shown to benefit greatly from morphological information, our final model outperforms previous state-of-the-art models on both tasks for the Turkish language. Besides, we show a net improvement of 18.86% and 4.61% F-1 over the previously proposed multi-task learner in the same setting for the DEP and the NER tasks, respectively. Empirical results for five different MTL settings show that incorporating subword contextual embeddings brings significant improvements for both tasks. In addition, we observed that multi-task learning consistently improves the performance of the DEP component.
Categories: cs.IR, cs.LG, cs.CL | __index_level_0__: 174,180
2008.05708
Robust Image Matching By Dynamic Feature Selection
Estimating dense correspondences between images is a long-standing image understanding task. Recent works introduce convolutional neural networks (CNNs) to extract high-level feature maps and find correspondences through feature matching. However, high-level feature maps are in low spatial resolution and therefore insufficient to provide accurate and fine-grained features to distinguish intra-class variations for correspondence matching. To address this problem, we generate robust features by dynamically selecting features at different scales. To resolve two critical issues in feature selection, i.e., how many and which scales of features to be selected, we frame the feature selection process as a sequential Markov decision-making process (MDP) and introduce an optimal selection strategy using reinforcement learning (RL). We define an RL environment for image matching in which each individual action either requires new features or terminates the selection episode by referring to a matching score. Deep neural networks are incorporated into our method and trained for decision making. Experimental results show that our method achieves comparable/superior performance with state-of-the-art methods on three benchmarks, demonstrating the effectiveness of our feature selection strategy.
Categories: cs.CV | __index_level_0__: 191,587
1612.00402
Reduced Order Models for Pricing European and American Options under Stochastic Volatility and Jump-Diffusion Models
European options can be priced by solving parabolic partial(-integro) differential equations under stochastic volatility and jump-diffusion models like the Heston, Merton, and Bates models. American option prices can be obtained by solving linear complementarity problems (LCPs) with the same operators. A finite difference discretization leads to a so-called full order model (FOM). Reduced order models (ROMs) are derived employing proper orthogonal decomposition (POD). The early exercise constraint of American options is enforced by a penalty on a subset of grid points. The presented numerical experiments demonstrate that pricing with ROMs can be orders of magnitude faster within a given model parameter variation range.
Categories: cs.CE | __index_level_0__: 64,878
1904.01537
Speech denoising by parametric resynthesis
This work proposes the use of clean speech vocoder parameters as the target for a neural network performing speech enhancement. These parameters have been designed for text-to-speech synthesis so that they both produce high-quality resyntheses and also are straightforward to model with neural networks, but have not been utilized in speech enhancement until now. In comparison to a matched text-to-speech system that is given the ground truth transcripts of the noisy speech, our model is able to produce more natural speech because it has access to the true prosody in the noisy speech. In comparison to two denoising systems, the oracle Wiener mask and a DNN-based mask predictor, our model equals the oracle Wiener mask in subjective quality and intelligibility and surpasses the realistic system. A vocoder-based upper bound shows that there is still room for improvement with this approach beyond the oracle Wiener mask. We test speaker-dependence with two speakers and show that a single model can be used for multiple speakers.
Categories: cs.SD, cs.LG | __index_level_0__: 126,163
2404.13515
FedTrans: Efficient Federated Learning via Multi-Model Transformation
Federated learning (FL) aims to train machine learning (ML) models across potentially millions of edge client devices. Yet, training and customizing models for FL clients is notoriously challenging due to the heterogeneity of client data, device capabilities, and the massive scale of clients, making individualized model exploration prohibitively expensive. State-of-the-art FL solutions personalize a globally trained model or concurrently train multiple models, but they often incur suboptimal model accuracy and huge training costs. In this paper, we introduce FedTrans, a multi-model FL training framework that automatically produces and trains high-accuracy, hardware-compatible models for individual clients at scale. FedTrans begins with a basic global model, identifies accuracy bottlenecks in model architectures during training, and then employs model transformation to derive new models for heterogeneous clients on the fly. It judiciously assigns models to individual clients while performing soft aggregation on multi-model updates to minimize total training costs. Our evaluations using realistic settings show that FedTrans improves individual client model accuracy by 14% - 72% while slashing training costs by 1.6X - 20X over state-of-the-art solutions.
Categories: cs.AI, cs.LG, Other | __index_level_0__: 448,327
1804.07354
The Aftermath of Disbanding an Online Hateful Community
Harassing and hateful speech in online spaces has become a common problem for platform maintainers and their users. The toxicity created by such content can discourage user participation and engagement. Therefore, it is crucial for and a common goal of platform managers to diminish hateful and harmful content. Over the last year, Reddit, a major online platform, enacted a policy of banning sub-communities (subreddits) that they deem harassing, with the goal of diminishing such activities. We studied the effects of banning the largest hateful subreddit (r/fatpeoplehate or FPH) on the users and other subreddits that were associated with it. We found that, while a number of outcomes were possible --- in this case the subreddit ban led to a sustained reduced interaction of its members (FPH users) with the Reddit platform. We also found that the many counter-actions taken by FPH users were short-lived and promptly neutralized by both Reddit administrators and the admins of individual subreddits. Our findings show that forum-banning can be an effective means by which to diminish objectionable content. Moreover, our detailed analysis of the post-banning behavior of FPH users highlights a number of the behavioral patterns that banning can create.
Categories: cs.SI | __index_level_0__: 95,510
2007.06709
Deep Image Orientation Angle Detection
Estimating and rectifying the orientation angle of any image is a challenging task. Initial work used hand-engineered features for this purpose; after the advent of deep learning, convolution-based neural networks showed significant improvement on this problem. However, this paper shows that the combination of a CNN and a custom loss function specially designed for angles leads to state-of-the-art results, including the estimation of the orientation angle of any image or document at any degree (0 to 360 degrees).
Categories: cs.LG, cs.CV | __index_level_0__: 187,097
2309.08915
On non-expandable cross-bifix-free codes
A cross-bifix-free code of length $n$ over $\mathbb{Z}_q$ is defined as a non-empty subset of $\mathbb{Z}_q^n$ satisfying that the prefix set of each codeword is disjoint from the suffix set of every codeword. Cross-bifix-free codes have found important applications in digital communication systems. One of the main research problems on cross-bifix-free codes is to construct cross-bifix-free codes as large as possible in size. Recently, Wang and Wang introduced a family of cross-bifix-free codes $S_{I,J}^{(k)}(n)$, which is a generalization of the classical cross-bifix-free codes studied earlier by Levenshtein, Gilbert and Chee {\it et al.}. It is known that $S_{I,J}^{(k)}(n)$ is nearly optimal in size and $S_{I,J}^{(k)}(n)$ is non-expandable if $k=n-1$ or $1\leq k<n/2$. In this paper, we first show that $S_{I,J}^{(k)}(n)$ is non-expandable if and only if $k=n-1$ or $1\leq k<n/2$, thereby improving the results in [Chee {\it et al.}, IEEE-TIT, 2013] and [Wang and Wang, IEEE-TIT, 2022]. We then construct a new family of cross-bifix-free codes $U^{(t)}_{I,J}(n)$ to expand $S_{I,J}^{(k)}(n)$ such that the resulting larger code $S_{I,J}^{(k)}(n)\bigcup U^{(t)}_{I,J}(n)$ is a non-expandable cross-bifix-free code whenever $S_{I,J}^{(k)}(n)$ is expandable. Finally, we present an explicit formula for the size of $S_{I,J}^{(k)}(n)\bigcup U^{(t)}_{I,J}(n)$.
Categories: cs.IT | __index_level_0__: 392,382
2409.01930
Efficient LLM Context Distillation
This paper specifically investigates context distillation, a method that extends the utility of task-specific examples by internalizing them, thus augmenting the example set accessible for model inference.
Categories: cs.LG | __index_level_0__: 485,511
2012.11094
Complexity of zigzag sampling algorithm for strongly log-concave distributions
We study the computational complexity of zigzag sampling algorithm for strongly log-concave distributions. The zigzag process has the advantage of not requiring time discretization for implementation, and that each proposed bouncing event requires only one evaluation of partial derivative of the potential, while its convergence rate is dimension independent. Using these properties, we prove that the zigzag sampling algorithm achieves $\varepsilon$ error in chi-square divergence with a computational cost equivalent to $O\bigl(\kappa^2 d^\frac{1}{2}(\log\frac{1}{\varepsilon})^{\frac{3}{2}}\bigr)$ gradient evaluations in the regime $\kappa \ll \frac{d}{\log d}$ under a warm start assumption, where $\kappa$ is the condition number and $d$ is the dimension.
Categories: cs.LG, Other | __index_level_0__: 212,525
2410.01336
VectorGraphNET: Graph Attention Networks for Accurate Segmentation of Complex Technical Drawings
This paper introduces a new approach to extract and analyze vector data from technical drawings in PDF format. Our method involves converting PDF files into SVG format and creating a feature-rich graph representation, which captures the relationships between vector entities using geometrical information. We then apply a graph attention transformer with hierarchical label definition to achieve accurate line-level segmentation. Our approach is evaluated on two datasets, including the public FloorplanCAD dataset, which achieves state-of-the-art results on weighted F1 score, surpassing existing methods. The proposed vector-based method offers a more scalable solution for large-scale technical drawing analysis compared to vision-based approaches, while also requiring significantly less GPU power than current state-of-the-art vector-based techniques. Moreover, it demonstrates improved performance in terms of the weighted F1 (wF1) score on the semantic segmentation task. Our results demonstrate the effectiveness of our approach in extracting meaningful information from technical drawings, enabling new applications, and improving existing workflows in the AEC industry. Potential applications of our approach include automated building information modeling (BIM) and construction planning, which could significantly impact the efficiency and productivity of the industry.
Categories: cs.CV | __index_level_0__: 493,724
2310.01885
Synthetic CT Generation via Variant Invertible Network for All-digital Brain PET Attenuation Correction
Attenuation correction (AC) is essential for the generation of artifact-free and quantitatively accurate positron emission tomography (PET) images. However, AC of PET faces challenges including inter-scan motion and erroneous transformation of structural voxel-intensities to PET attenuation-correction factors. Nowadays, the problem of AC for quantitative PET has been solved to a large extent after the commercial availability of devices combining PET with computed tomography (CT). Meanwhile, considering the feasibility of a deep learning approach for PET AC without anatomical imaging, this paper develops a PET AC method, which uses deep learning to generate continuously valued CT images from non-attenuation corrected PET images for AC on brain PET imaging. Specifically, an invertible network combined with the variable augmentation strategy that can achieve the bidirectional inference processes is proposed for synthetic CT generation (IVNAC). To evaluate the performance of the proposed algorithm, we conducted a comprehensive study on a total of 1440 data from 37 clinical patients using comparative algorithms (such as Cycle-GAN and Pix2pix). Perceptual analysis and quantitative evaluations illustrate that the invertible network for PET AC outperforms other existing AC models, which demonstrates the potential of the proposed method and the feasibility of achieving brain PET AC without CT.
Categories: cs.LG | __index_level_0__: 396,622
1805.08587
Deep Feature Aggregation and Image Re-ranking with Heat Diffusion for Image Retrieval
Image retrieval based on deep convolutional features has demonstrated state-of-the-art performance in popular benchmarks. In this paper, we present a unified solution to address deep convolutional feature aggregation and image re-ranking by simulating the dynamics of heat diffusion. A distinctive problem in image retrieval is that repetitive or \emph{bursty} features tend to dominate final image representations, resulting in representations less distinguishable. We show that by considering each deep feature as a heat source, our unsupervised aggregation method is able to avoid over-representation of \emph{bursty} features. We additionally provide a practical solution for the proposed aggregation method and further show the efficiency of our method in experimental evaluation. Inspired by the aforementioned deep feature aggregation method, we also propose a method to re-rank a number of top ranked images for a given query image by considering the query as the heat source. Finally, we extensively evaluate the proposed approach with pre-trained and fine-tuned deep networks on common public benchmarks and show superior performance compared to previous work.
Categories: cs.IR, cs.CV | __index_level_0__: 98,180
2211.07855
Relationship of the language distance to English ability of a country
Language difference is one of the factors that hinder the acquisition of second language skills. In this article, we introduce a novel solution that leverages the strength of deep neural networks to measure the semantic dissimilarity between languages based on their word distributions in the embedding space of a multilingual pre-trained language model (e.g., BERT). Then, we empirically examine the effectiveness of the proposed semantic language distance (SLD) in explaining the consistent variation in the English ability of countries, which is proxied by their performance in the Internet-Based Test of English as a Foreign Language (TOEFL iBT). The experimental results show that the language distance has a negative influence on a country's average English ability. Interestingly, the effect is more significant on speaking and writing subskills, which pertain to the productive aspects of language learning. Besides, we provide specific recommendations for future research directions.
Categories: cs.CL | __index_level_0__: 330,393
2009.06625
Revealing Secrets in SPARQL Session Level
Based on Semantic Web technologies, knowledge graphs help users to discover information of interest by using live SPARQL services. Answer-seekers often examine intermediate results iteratively and modify SPARQL queries repeatedly in a search session. In this context, understanding user behaviors is critical for effective intention prediction and query optimization. However, these behaviors have not yet been researched systematically at the SPARQL session level. This paper reveals secrets of session-level user search behaviors by conducting a comprehensive investigation over massive real-world SPARQL query logs. In particular, we thoroughly assess query changes made by users w.r.t. structural and data-driven features of SPARQL queries. To illustrate the potentiality of our findings, we employ an application example of how to use our findings, which might be valuable to devise efficient SPARQL caching, auto-completion, query suggestion, approximation, and relaxation techniques in the future.
Categories: cs.AI, cs.DB | __index_level_0__: 195,702
1704.03844
Determining Song Similarity via Machine Learning Techniques and Tagging Information
The task of determining item similarity is a crucial one in a recommender system. This constitutes the base upon which the recommender system will work to determine which items are more likely to be enjoyed by a user, resulting in more user engagement. In this paper we tackle the problem of determining song similarity based solely on song metadata (such as the performer, and song title) and on tags contributed by users. We evaluate our approach under a series of different machine learning algorithms. We conclude that tf-idf achieves better results than Word2Vec to model the dataset to feature vectors. We also conclude that k-NN models have better performance than SVMs and Linear Regression for this problem.
Categories: cs.LG | __index_level_0__: 71,699
2303.03327
Tight Bounds for $\gamma$-Regret via the Decision-Estimation Coefficient
In this work, we give a statistical characterization of the $\gamma$-regret for arbitrary structured bandit problems, the regret which arises when comparing against a benchmark that is $\gamma$ times the optimal solution. The $\gamma$-regret emerges in structured bandit problems over a function class $\mathcal{F}$ where finding an exact optimum of $f \in \mathcal{F}$ is intractable. Our characterization is given in terms of the $\gamma$-DEC, a statistical complexity parameter for the class $\mathcal{F}$, which is a modification of the constrained Decision-Estimation Coefficient (DEC) of Foster et al., 2023 (and closely related to the original offset DEC of Foster et al., 2021). Our lower bound shows that the $\gamma$-DEC is a fundamental limit for any model class $\mathcal{F}$: for any algorithm, there exists some $f \in \mathcal{F}$ for which the $\gamma$-regret of that algorithm scales (nearly) with the $\gamma$-DEC of $\mathcal{F}$. We provide an upper bound showing that there exists an algorithm attaining a nearly matching $\gamma$-regret. Due to significant challenges in applying the prior results on the DEC to the $\gamma$-regret case, both our lower and upper bounds require novel techniques and a new algorithm.
Categories: cs.LG | __index_level_0__: 349,685
2005.06667
Exploiting Multi-Layer Grid Maps for Surround-View Semantic Segmentation of Sparse LiDAR Data
In this paper, we consider the transformation of laser range measurements into a top-view grid map representation to approach the task of LiDAR-only semantic segmentation. Since the recent publication of the SemanticKITTI data set, researchers are now able to study semantic segmentation of urban LiDAR sequences based on a reasonable amount of data. While other approaches propose to directly learn on the 3D point clouds, we are exploiting a grid map framework to extract relevant information and represent them by using multi-layer grid maps. This representation allows us to use well-studied deep learning architectures from the image domain to predict a dense semantic grid map using only the sparse input data of a single LiDAR scan. We compare single-layer and multi-layer approaches and demonstrate the benefit of a multi-layer grid map input. Since the grid map representation allows us to predict a dense, 360{\deg} semantic environment representation, we further develop a method to combine the semantic information from multiple scans and create dense ground truth grids. This method allows us to evaluate and compare the performance of our models not only based on grid cells with a detection, but on the full visible measurement range.
Categories: cs.RO, cs.CV | __index_level_0__: 177,076
2405.11995
Safe by Design Autonomous Driving Systems
Developing safe autonomous driving systems is a major scientific and technical challenge. Existing AI-based end-to-end solutions do not offer the necessary safety guarantees, while traditional systems engineering approaches are defeated by the complexity of the problem. Currently, there is an increasing interest in hybrid design solutions, integrating machine learning components, when necessary, while using model-based components for goal management and planning. We study a method for building safe by design autonomous driving systems, based on the assumption that the capability to drive boils down to the coordinated execution of a given set of driving operations. The assumption is substantiated by a compositionality result considering that autopilots are dynamic systems receiving a small number of types of vistas as input, each vista defining a free space in its neighborhood. It is shown that safe driving for each type of vista in the corresponding free space, implies safe driving for any possible scenario under some easy-to-check conditions concerning the transition between vistas. The designed autopilot comprises distinct control policies one per type of vista, articulated in two consecutive phases. The first phase consists of carefully managing a potentially risky situation by virtually reducing speed, while the second phase consists of exiting the situation by accelerating. The autopilots designed use for their predictions simple functions characterizing the acceleration and deceleration capabilities of the vehicles. They cover the main driving operations, including entering a main road, overtaking, crossing intersections protected by traffic lights or signals, and driving on freeways. The results presented reinforce the case for hybrid solutions that incorporate mathematically elegant and robust decision methods that are safe by design.
Categories: cs.MA | __index_level_0__: 455,375
2302.03916
QS-ADN: Quasi-Supervised Artifact Disentanglement Network for Low-Dose CT Image Denoising by Local Similarity Among Unpaired Data
Deep learning has been successfully applied to low-dose CT (LDCT) image denoising for reducing potential radiation risk. However, the widely reported supervised LDCT denoising networks require a training set of paired images, which is expensive to obtain and cannot be perfectly simulated. Unsupervised learning utilizes unpaired data and is highly desirable for LDCT denoising. As an example, an artifact disentanglement network (ADN) relies on unpaired images and obviates the need for supervision, but the results of artifact reduction are not as good as those obtained through supervised learning. An important observation is that there is often hidden similarity among unpaired data that can be utilized. This paper introduces a new learning mode, called quasi-supervised learning, to empower the ADN for LDCT image denoising. For every LDCT image, the best matched image is first found from an unpaired normal-dose CT (NDCT) dataset. Then, the matched pairs and the corresponding matching degree as prior information are used to construct and train our ADN-type network for LDCT denoising. The proposed method is different from (but compatible with) supervised and semi-supervised learning modes and can be easily implemented by modifying existing networks. The experimental results show that the method is competitive with state-of-the-art methods in terms of noise suppression and contextual fidelity. The code and working dataset are publicly available at https://github.com/ruanyuhui/ADN-QSDL.git.
Categories: cs.LG | __index_level_0__: 344,518
2305.06967
Data quality dimensions for fair AI
Artificial Intelligence (AI) systems are not intrinsically neutral and biases trickle in any type of technological tool. In particular when dealing with people, the impact of AI algorithms' technical errors originating with mislabeled data is undeniable. As they feed wrong and discriminatory classifications, these systems are not systematically guarded against bias. In this article we consider the problem of bias in AI systems from the point of view of data quality dimensions. We highlight the limited model construction of bias mitigation tools based on accuracy strategy, illustrating potential improvements of a specific tool in gender classification errors occurring in two typically difficult contexts: the classification of non-binary individuals, for which the label set becomes incomplete with respect to the dataset; and the classification of transgender individuals, for which the dataset becomes inconsistent with respect to the label set. Using formal methods for reasoning about the behavior of the classification system in presence of a changing world, we propose to reconsider the fairness of the classification task in terms of completeness, consistency, timeliness and reliability, and offer some theoretical results.
Categories: cs.AI, Other | __index_level_0__: 363,722
2405.16954
Convergence of SGD with momentum in the nonconvex case: A time window-based analysis
The stochastic gradient descent method with momentum (SGDM) is a common approach for solving large-scale and stochastic optimization problems. Despite its popularity, the convergence behavior of SGDM remains less understood in nonconvex scenarios. This is primarily due to the absence of a sufficient descent property and challenges in simultaneously controlling the momentum and stochastic errors in an almost sure sense. To address these challenges, we investigate the behavior of SGDM over specific time windows, rather than examining the descent of consecutive iterates as in traditional studies. This time window-based approach simplifies the convergence analysis and enables us to establish the iterate convergence result for SGDM under the {\L}ojasiewicz property. We further provide local convergence rates which depend on the underlying {\L}ojasiewicz exponent and the utilized step size schemes.
Categories: cs.LG | __index_level_0__: 457,695
2011.00459
A multiscale model of terrain dynamics for real-time earthmoving simulation
A multiscale model for real-time simulation of terrain dynamics is explored. To represent the dynamics on different scales the model combines the description of soil as a continuous solid, as distinct particles and as rigid multibodies. The models are dynamically coupled to each other and to the earthmoving equipment. Agitated soil is represented by a hybrid of contacting particles and continuum solid, with the moving equipment and resting soil as geometric boundaries. Each zone of active soil is aggregated into distinct bodies, with the proper mass, momentum and frictional-cohesive properties, which constrain the equipment's multibody dynamics. The particle model parameters are pre-calibrated to the bulk mechanical parameters for a wide range of different soils. The result is a computationally efficient model for earthmoving operations that resolve the motion of the soil, using a fast iterative solver, and provide realistic forces and dynamic for the equipment, using a direct solver for high numerical precision. Numerical simulations of excavation and bulldozing operations are performed to validate the model and measure the computational performance. Reference data is produced using coupled discrete element and multibody dynamics simulations at relatively high resolution. The digging resistance and soil displacements with the real-time multiscale model agree with the reference model up to 10-25%, and run more than three orders of magnitude faster.
Categories: cs.CE | __index_level_0__: 204,229
2211.06721
Using Features at Multiple Temporal and Spatial Resolutions to Predict Human Behavior in Real Time
When performing complex tasks, humans naturally reason at multiple temporal and spatial resolutions simultaneously. We contend that for an artificially intelligent agent to effectively model human teammates, i.e., demonstrate computational theory of mind (ToM), it should do the same. In this paper, we present an approach for integrating high and low-resolution spatial and temporal information to predict human behavior in real time and evaluate it on data collected from human subjects performing simulated urban search and rescue (USAR) missions in a Minecraft-based environment. Our model composes neural networks for high and low-resolution feature extraction with a neural network for behavior prediction, with all three networks trained simultaneously. The high-resolution extractor encodes dynamically changing goals robustly by taking as input the Manhattan distance difference between the humans' Minecraft avatars and candidate goals in the environment for the latest few actions, computed from a high-resolution gridworld representation. In contrast, the low-resolution extractor encodes participants' historical behavior using a historical state matrix computed from a low-resolution graph representation. Through supervised learning, our model acquires a robust prior for human behavior prediction, and can effectively deal with long-term observations. Our experimental results demonstrate that our method significantly improves prediction accuracy compared to approaches that only use high-resolution information.
Categories: cs.AI, cs.LG | __index_level_0__: 329,998
1903.06023
Deep Distribution Regression
Due to their flexibility and predictive performance, machine-learning based regression methods have become an important tool for predictive modeling and forecasting. However, most methods focus on estimating the conditional mean or specific quantiles of the target quantity and do not provide the full conditional distribution, which contains uncertainty information that might be crucial for decision making. In this article, we provide a general solution by transforming a conditional distribution estimation problem into a constrained multi-class classification problem, in which tools such as deep neural networks can be applied. We propose a novel joint binary cross-entropy loss function to accomplish this goal. We demonstrate its performance in various simulation studies, comparing to state-of-the-art competing methods. Additionally, our method shows improved accuracy in a probabilistic solar energy forecasting problem.
Categories: cs.LG | __index_level_0__: 124,294
2010.10217
Quantum circuit architecture search for variational quantum algorithms
Variational quantum algorithms (VQAs) are expected to be a path to quantum advantages on noisy intermediate-scale quantum devices. However, both empirical and theoretical results exhibit that the deployed ansatz heavily affects the performance of VQAs such that an ansatz with a larger number of quantum gates enables a stronger expressivity, while the accumulated noise may render a poor trainability. To maximally improve the robustness and trainability of VQAs, here we devise a resource and runtime efficient scheme termed quantum architecture search (QAS). In particular, given a learning task, QAS automatically seeks a near-optimal ansatz (i.e., circuit architecture) to balance benefits and side-effects brought by adding more noisy quantum gates to achieve a good performance. We implement QAS on both the numerical simulator and real quantum hardware, via the IBM cloud, to accomplish data classification and quantum chemistry tasks. In the problems studied, numerical and experimental results show that QAS can not only alleviate the influence of quantum noise and barren plateaus, but also outperforms VQAs with pre-selected ansatze.
Categories: cs.LG | __index_level_0__: 201,822
2411.19941
Perception Test 2024: Challenge Summary and a Novel Hour-Long VideoQA Benchmark
Following the successful 2023 edition, we organised the Second Perception Test challenge as a half-day workshop alongside the IEEE/CVF European Conference on Computer Vision (ECCV) 2024, with the goal of benchmarking state-of-the-art video models and measuring the progress since last year using the Perception Test benchmark. This year, the challenge had seven tracks (up from six last year) and covered low-level and high-level tasks, with language and non-language interfaces, across video, audio, and text modalities; the additional track covered hour-long video understanding and introduced a novel video QA benchmark 1h-walk VQA. Overall, the tasks in the different tracks were: object tracking, point tracking, temporal action localisation, temporal sound localisation, multiple-choice video question-answering, grounded video question-answering, and hour-long video question-answering. We summarise in this report the challenge tasks and results, and introduce in detail the novel hour-long video QA benchmark 1h-walk VQA.
Categories: cs.LG, cs.CL, cs.CV | __index_level_0__: 512,436
2407.19439
Business and Regulatory Responses to Artificial Intelligence: Dynamic Regulation, Innovation Ecosystems and the Strategic Management of Disruptive Technology
Identifying and then implementing an effective response to disruptive new AI technologies is enormously challenging for any business looking to integrate AI into their operations, as well as regulators looking to leverage AI-related innovation as a mechanism for achieving regional economic growth. These business and regulatory challenges are particularly significant given the broad reach of AI, as well as the multiple uncertainties surrounding such technologies and their future development and effects. This article identifies two promising strategies for meeting the AI challenge, focusing on the example of Fintech. First, dynamic regulation, in the form of regulatory sandboxes and other regulatory approaches that aim to provide a space for responsible AI-related innovation. An empirical study provides preliminary evidence to suggest that jurisdictions that adopt a more proactive approach to Fintech regulation can attract greater investment. The second strategy relates to so-called innovation ecosystems. It is argued that such ecosystems are most effective when they afford opportunities for creative partnerships between well-established corporations and AI-focused startups and that this aspect of a successful innovation ecosystem is often overlooked in the existing discussion. The article suggests that these two strategies are interconnected, in that greater investment is an important element in both fostering and signaling a well-functioning innovation ecosystem and that a well-functioning ecosystem will, in turn, attract more funding. The resulting synergies between these strategies can, therefore, provide a jurisdiction with a competitive edge in becoming a regional hub for AI-related activity.
Categories: cs.AI, cs.CY | __index_level_0__: 476,788
1905.00111
Universal Mutual Information Privacy Guarantees for Smart Meters
Smart meters enable improvements in electricity distribution system efficiency at some cost in customer privacy. Users with home batteries can mitigate this privacy loss by applying charging policies that mask their underlying energy use. A battery charging policy is proposed and shown to provide universal privacy guarantees subject to a constraint on energy cost. The guarantee bounds our strategy's maximal information leakage from the user to the utility provider under general stochastic models of user energy consumption. The policy construction adapts coding strategies for non-probabilistic permuting channels to this privacy problem.
Categories: cs.IT | __index_level_0__: 129,397
2109.06363
Sensor Adversarial Traits: Analyzing Robustness of 3D Object Detection Sensor Fusion Models
A critical aspect of autonomous vehicles (AVs) is the object detection stage, which is increasingly being performed with sensor fusion models: multimodal 3D object detection models which utilize both 2D RGB image data and 3D data from a LIDAR sensor as inputs. In this work, we perform the first study to analyze the robustness of a high-performance, open source sensor fusion model architecture towards adversarial attacks and challenge the popular belief that the use of additional sensors automatically mitigates the risk of adversarial attacks. We find that despite the use of a LIDAR sensor, the model is vulnerable to our purposefully crafted image-based adversarial attacks including disappearance, universal patch, and spoofing. After identifying the underlying reason, we explore some potential defenses and provide some recommendations for improved sensor fusion models.
Categories: cs.LG, cs.CV | __index_level_0__: 255,121
2305.19005
Hybrid Driven Learning for Channel Estimation in Intelligent Reflecting Surface Aided Millimeter Wave Communications
Intelligent reflecting surfaces (IRS) have been proposed in millimeter wave (mmWave) and terahertz (THz) systems to achieve both coverage and capacity enhancement, where the design of hybrid precoders, combiners, and the IRS typically relies on channel state information. In this paper, we address the problem of uplink wideband channel estimation for IRS aided multiuser multiple-input single-output (MISO) systems with hybrid architectures. Combining the structure of model driven and data driven deep learning approaches, a hybrid driven learning architecture is devised for joint estimation and learning the properties of the channels. For a passive IRS aided system, we propose a residual learned approximate message passing as a model driven network. A denoising and attention network in the data driven network is used to jointly learn spatial and frequency features. Furthermore, we design a flexible hybrid driven network in a hybrid passive and active IRS aided system. Specifically, the depthwise separable convolution is applied to the data driven network, leading to less network complexity and fewer parameters at the IRS side. Numerical results indicate that in both systems, the proposed hybrid driven channel estimation methods significantly outperform existing deep learning-based schemes and effectively reduce the pilot overhead by about 60% in IRS aided systems.
Categories: cs.IT | __index_level_0__: 369,338
1802.08701
Machine learning based hyperspectral image analysis: A survey
Hyperspectral sensors enable the study of the chemical properties of scene materials remotely for the purpose of identification, detection, and chemical composition analysis of objects in the environment. Hence, hyperspectral images captured from earth observing satellites and aircraft have been increasingly important in agriculture, environmental monitoring, urban planning, mining, and defense. Machine learning algorithms, due to their outstanding predictive power, have become a key tool for modern hyperspectral image analysis. Therefore, a solid understanding of machine learning techniques has become essential for remote sensing researchers and practitioners. This paper reviews and compares recent machine learning-based hyperspectral image analysis methods published in the literature. We organize the methods by the image analysis task and by the type of machine learning algorithm, and present a two-way mapping between the image analysis tasks and the types of machine learning algorithms that can be applied to them. The paper is comprehensive in coverage of both hyperspectral image analysis tasks and machine learning algorithms. The image analysis tasks considered are land cover classification, target detection, unmixing, and physical parameter estimation. The machine learning algorithms covered are Gaussian models, linear regression, logistic regression, support vector machines, Gaussian mixture model, latent linear models, sparse linear models, Gaussian mixture models, ensemble learning, directed graphical models, undirected graphical models, clustering, Gaussian processes, Dirichlet processes, and deep learning. We also discuss the open challenges in the field of hyperspectral image analysis and explore possible future directions.
Categories: cs.CV | __index_level_0__: 91,158
2008.11227
A Computationally Efficient Multiclass Time-Frequency Common Spatial Pattern Analysis on EEG Motor Imagery
Common spatial pattern (CSP) is a popular feature extraction method for electroencephalogram (EEG) motor imagery (MI). This study modifies the conventional CSP algorithm to improve the multi-class MI classification accuracy and ensure the computation process is efficient. The EEG MI data is gathered from the Brain-Computer Interface (BCI) Competition IV. At first, a bandpass filter and a time-frequency analysis are performed for each experiment trial. Then, the optimal EEG signals for every experiment trial are selected based on the signal energy for CSP feature extraction. In the end, the extracted features are classified by three classifiers, linear discriminant analysis (LDA), naïve Bayes (NVB), and support vector machine (SVM), in parallel for classification accuracy comparison. The experiment results show the proposed algorithm's average computation time is 37.22% less than the FBCSP (1st winner in the BCI Competition IV) and 4.98% longer than the conventional CSP method. For the classification rate, the proposed algorithm's kappa value achieved the 2nd highest compared with the top 3 winners in BCI Competition IV.
Categories: cs.LG | __index_level_0__: 193,206
2310.10497
LocSelect: Target Speaker Localization with an Auditory Selective Hearing Mechanism
The prevailing noise-resistant and reverberation-resistant localization algorithms primarily emphasize separating and providing directional output for each speaker in multi-speaker scenarios, without association with the identity of speakers. In this paper, we present a target speaker localization algorithm with a selective hearing mechanism. Given a reference speech of the target speaker, we first produce a speaker-dependent spectrogram mask to eliminate interfering speakers' speech. Subsequently, a Long short-term memory (LSTM) network is employed to extract the target speaker's location from the filtered spectrogram. Experiments validate the superiority of our proposed method over the existing algorithms for different scale invariant signal-to-noise ratios (SNR) conditions. Specifically, at SNR = -10 dB, our proposed network LocSelect achieves a mean absolute error (MAE) of 3.55 and an accuracy (ACC) of 87.40%.
Categories: cs.SD, cs.AI | __index_level_0__: 400,249
1002.4048
A Hough Transform based Technique for Text Segmentation
Text segmentation is an inherent part of an OCR system irrespective of the domain of application of it. The OCR system contains a segmentation module where the text lines, words and ultimately the characters must be segmented properly for its successful recognition. The present work implements a Hough transform based technique for line and word segmentation from digitized images. The proposed technique is applied not only on the document image dataset but also on dataset for business card reader system and license plate recognition system. For standardization of the performance of the system the technique is also applied on public domain dataset published in the website by CMATER, Jadavpur University. The document images consist of multi-script printed and hand written text lines with variety in script and line spacing in single document image. The technique performs quite satisfactorily when applied on mobile camera captured business card images with low resolution. The usefulness of the technique is verified by applying it in a commercial project for localization of license plate of vehicles from surveillance camera images by the process of segmentation itself. The accuracy of the technique for word segmentation, as verified experimentally, is 85.7% for document images, 94.6% for business card images and 88% for surveillance camera images.
Categories: cs.IR | __index_level_0__: 5,753
2410.03432
EB-NeRD: A Large-Scale Dataset for News Recommendation
Personalized content recommendations have been pivotal to the content experience in digital media from video streaming to social networks. However, several domain specific challenges have held back adoption of recommender systems in news publishing. To address these challenges, we introduce the Ekstra Bladet News Recommendation Dataset (EB-NeRD). The dataset encompasses data from over a million unique users and more than 37 million impression logs from Ekstra Bladet. It also includes a collection of over 125,000 Danish news articles, complete with titles, abstracts, bodies, and metadata, such as categories. EB-NeRD served as the benchmark dataset for the RecSys '24 Challenge, where it was demonstrated how the dataset can be used to address both technical and normative challenges in designing effective and responsible recommender systems for news publishing. The dataset is available at: https://recsys.eb.dk.
Categories: cs.AI, cs.IR, cs.LG | __index_level_0__: 494,778
1506.00406
Monolingually Derived Phrase Scores for Phrase Based SMT Using Neural Networks Vector Representations
In this paper, we propose two new features for estimating phrase-based machine translation parameters from mainly monolingual data. Our method is based on two recently introduced neural network vector representation models for words and sentences. This is the first time that these models have been used in an end-to-end phrase-based machine translation system. Scores obtained from our method can recover more than 80% of the BLEU loss caused by removing phrase table probabilities. We also show that our features combined with the phrase table probabilities improve the BLEU score by 0.74 absolute points.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
43,665
2502.04400
Adaptive Prototype Knowledge Transfer for Federated Learning with Mixed Modalities and Heterogeneous Tasks
Multimodal Federated Learning (MFL) enables multiple clients to collaboratively train models on multimodal data while ensuring clients' privacy. However, modality and task heterogeneity hinder clients from learning a unified representation, weakening local model generalization, especially in MFL with mixed modalities where only some clients have multimodal data. In this work, we propose an Adaptive prototype-based Multimodal Federated Learning (AproMFL) framework for mixed modalities and heterogeneous tasks to address the aforementioned issues. Our AproMFL transfers knowledge through adaptively constructed prototypes without requiring a prior public dataset. Clients adaptively select prototype construction methods in line with their tasks; the server converts client prototypes into unified multimodal prototypes and aggregates them to form global prototypes, avoiding the need for clients to keep unified labels. We divide the model into various modules and aggregate only the mapping modules to reduce communication and computation overhead. To address aggregation issues under heterogeneity, we develop a client relationship graph-based scheme to dynamically adjust aggregation weights. Extensive experiments on representative datasets demonstrate the effectiveness of AproMFL.
false
false
false
false
true
false
true
false
false
false
false
false
true
false
false
false
false
true
531,139
1907.07061
How much real data do we actually need: Analyzing object detection performance using synthetic and real data
In recent years, deep learning models have resulted in a huge amount of progress in various areas, including computer vision. By nature, the supervised training of deep models requires a large amount of data to be available. This ideal case is usually not tractable as the data annotation is a tremendously exhausting and costly task to perform. An alternative is to use synthetic data. In this paper, we take a comprehensive look into the effects of replacing real data with synthetic data. We further analyze the effects of having a limited amount of real data. We use multiple synthetic and real datasets along with a simulation tool to create large amounts of cheaply annotated synthetic data. We analyze the domain similarity of each of these datasets. We provide insights about designing a methodological procedure for training deep networks using these datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
138,774
2111.05534
Verifying Controllers with Convolutional Neural Network-based Perception: A Case for Intelligible, Safe, and Precise Abstractions
Convolutional Neural Networks (CNNs) for object detection, lane detection, and segmentation now sit at the head of most autonomy pipelines, and yet their safety analysis remains an important challenge. Formal analysis of perception models is fundamentally difficult because their correctness is hard, if not impossible, to specify. We present a technique for inferring intelligible and safe abstractions for perception models from system-level safety requirements, data, and program analysis of the modules that are downstream from perception. The technique can help trade off safety, size, and precision in creating abstractions and the subsequent verification. We apply the method to two significant case studies based on high-fidelity simulations: (a) a vision-based lane keeping controller for an autonomous vehicle and (b) a controller for an agricultural robot. We show how the generated abstractions can be composed with the downstream modules and the resulting abstract system can then be verified using program analysis tools like CBMC. Detailed evaluations of the impacts of size, safety requirements, and the environmental parameters (e.g., lighting, road surface, plant type) on the precision of the generated abstractions suggest that the approach can help guide the search for corner cases and safe operating envelopes.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
265,825
2401.02873
Optimal Chaining of Vehicle Plans with Time Windows
For solving problems from the domain of Mobility-on-Demand (MoD), we often need to connect vehicle plans into plans spanning a longer time, a process we call plan chaining. As we show in this work, chaining of the plans can be used not only to reduce the size of MoD providers' fleets (the fleet-sizing problem) but also to reduce the total driven distance by providing high-quality vehicle dispatching solutions in MoD systems. Recently, a solution that uses this principle has been proposed to solve the fleet-sizing problem. The method does not consider the time flexibility of the plans. Instead, plans are fixed in time and cannot be delayed. However, time flexibility is an essential property of all vehicle problems with time windows. This work presents a new plan chaining formulation that considers delays as allowed by the time windows, together with a solution method for solving it. Moreover, we prove that the proposed plan chaining method is optimal, and we analyze its complexity. Finally, we list some practical applications and perform a demonstration for one of them: a new heuristic vehicle dispatching method for solving the static dial-a-ride problem. The demonstration results show that our proposed method provides a better solution than the two heuristic baselines for the majority of instances that cannot be solved optimally. At the same time, our method does not have the largest computational time requirements compared to the baselines. Therefore, we conclude that the proposed optimal chaining method is not only theoretically sound but also practically applicable.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
419,873
2012.03642
Generalised Perceptron Learning
We present a generalisation of Rosenblatt's traditional perceptron learning algorithm to the class of proximal activation functions and demonstrate how this generalisation can be interpreted as an incremental gradient method applied to a novel energy function. This novel energy function is based on a generalised Bregman distance, for which the gradient with respect to the weights and biases does not require the differentiation of the activation function. The interpretation as an energy minimisation algorithm paves the way for many new algorithms, of which we explore a novel variant of the iterative soft-thresholding algorithm for the learning of sparse perceptrons.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
210,191
2411.15183
Balancing property optimization and constraint satisfaction for constrained multi-property molecular optimization
Molecular optimization, which aims to discover improved molecules from a vast chemical search space, is a critical step in chemical development. Various artificial intelligence technologies have demonstrated high effectiveness and efficiency on molecular optimization tasks. However, few of these technologies focus on balancing property optimization with constraint satisfaction, making it difficult to obtain high-quality molecules that not only possess desirable properties but also meet various constraints. To address this issue, we propose a constrained multi-property molecular optimization framework (CMOMO), which is a flexible and efficient method to simultaneously optimize multiple molecular properties while satisfying several drug-like constraints. CMOMO improves multiple properties of molecules with constraints based on dynamic cooperative optimization, which dynamically handles the constraints across various scenarios. In addition, CMOMO evaluates multiple properties within discrete chemical spaces cooperatively with the evolution of molecules within an implicit molecular space to guide the evolutionary search. Experimental results show the superior performance of the proposed CMOMO over five state-of-the-art molecular optimization methods on two benchmark tasks of simultaneously optimizing multiple non-biological activity properties while satisfying two structural constraints. Furthermore, the practical applicability of CMOMO is verified on two practical tasks, where it identified a collection of candidate ligands of $\beta$2-adrenoceptor GPCR and candidate inhibitors of glycogen synthase kinase-3$\beta$ with high property values under drug-like constraints.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
510,474
1512.00567
Rethinking the Inception Architecture for Computer Vision
Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks that aim at utilizing the added computation as efficiently as possible through suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference while using fewer than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error on the validation set (3.6% error on the test set) and 17.3% top-1 error on the validation set.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
49,723
2403.04866
A Modular End-to-End Multimodal Learning Method for Structured and Unstructured Data
Multimodal learning is a rapidly growing research field that has revolutionized multitasking and generative modeling in AI. While much of the research has focused on dealing with unstructured data (e.g., language, images, audio, or video), structured data (e.g., tabular data, time series, or signals) has received less attention. However, many industry-relevant use cases involve or can benefit from both types of data. In this work, we propose a modular, end-to-end multimodal learning method called MAGNUM, which can natively handle both structured and unstructured data. MAGNUM is flexible enough to employ any specialized unimodal module to extract, compress, and fuse information from all available modalities.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
435,766
2405.04952
Evolving R2 to R2+: Optimal, Delayed Line-of-sight Vector-based Path Planning
A vector-based any-angle path planner, R2, is evolved into R2+ in this paper. By delaying line-of-sight, R2 and R2+ search times are largely unaffected by the distance between the start and goal points, but are exponential in the worst case with respect to the number of collisions during searches. To improve search times, additional discarding conditions in the overlap rule are introduced in R2+. In addition, R2+ resolves interminable chases in R2 by replacing ad hoc points with limited occupied-sector traces from target nodes, and simplifies R2 by employing new abstract structures and ensuring target progression during a trace. R2+ preserves the speed of R2 when paths are expected to detour around few obstacles, and searches significantly faster than R2 in maps with many disjoint obstacles.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
452,741
2410.14135
Inverse Reinforcement Learning from Non-Stationary Learning Agents
In this paper, we study an inverse reinforcement learning problem that involves learning the reward function of a learning agent using trajectory data collected while this agent is learning its optimal policy. To address this problem, we propose an inverse reinforcement learning method that allows us to estimate the policy parameters of the learning agent which can then be used to estimate its reward function. Our method relies on a new variant of the behavior cloning algorithm, which we call bundle behavior cloning, and uses a small number of trajectories generated by the learning agent's policy at different points in time to learn a set of policies that match the distribution of actions observed in the sampled trajectories. We then use the cloned policies to train a neural network model that estimates the reward function of the learning agent. We provide a theoretical analysis to show a complexity result on bound guarantees for our method that beats standard behavior cloning as well as numerical experiments for a reinforcement learning problem that validate the proposed method.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
499,882
2412.02899
Adaptive LiDAR Odometry and Mapping for Autonomous Agricultural Mobile Robots in Unmanned Farms
Unmanned and intelligent agricultural systems are crucial for enhancing agricultural efficiency and for helping mitigate the effect of labor shortage. However, unlike urban environments, agricultural fields impose distinct and unique challenges on autonomous robotic systems, such as the unstructured and dynamic nature of the environment, the rough and uneven terrain, and the resulting non-smooth robot motion. To address these challenges, this work introduces an adaptive LiDAR odometry and mapping framework tailored for autonomous agricultural mobile robots operating in complex agricultural environments. The proposed framework consists of a robust LiDAR odometry algorithm based on dense Generalized-ICP scan matching, and an adaptive mapping module that considers motion stability and point cloud consistency for selective map updates. The key design principle of this framework is to prioritize the incremental consistency of the map by rejecting motion-distorted points and sparse dynamic objects, which in turn leads to high accuracy in odometry estimated from scan matching against the map. The effectiveness of the proposed method is validated via extensive evaluation against state-of-the-art methods on field datasets collected in real-world agricultural environments featuring various planting types, terrain types, and robot motion profiles. Results demonstrate that our method can achieve accurate odometry estimation and mapping results consistently and robustly across diverse agricultural settings, whereas other methods are sensitive to abrupt robot motion and accumulated drift in unstructured environments. Further, the computational efficiency of our method is competitive compared with other methods. The source code of the developed method and the associated field dataset are publicly available at https://github.com/UCR-Robotics/AG-LOAM.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
513,730
1502.05082
What makes for effective detection proposals?
Current top performing object detectors employ detection proposals to guide the search for objects, thereby avoiding exhaustive sliding window search across images. Despite the popularity and widespread use of detection proposals, it is unclear which trade-offs are made when using them during object detection. We provide an in-depth analysis of twelve proposal methods along with four baselines regarding proposal repeatability, ground truth annotation recall on PASCAL, ImageNet, and MS COCO, and their impact on DPM, R-CNN, and Fast R-CNN detection performance. Our analysis shows that for object detection improving proposal localisation accuracy is as important as improving recall. We introduce a novel metric, the average recall (AR), which rewards both high recall and good localisation and correlates surprisingly well with detection performance. Our findings show common strengths and weaknesses of existing methods, and provide insights and metrics for selecting and tuning proposal methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
40,333
2302.09780
Compressing Tabular Data via Latent Variable Estimation
Data used for analytics and machine learning often take the form of tables with categorical entries. We introduce a family of lossless compression algorithms for such data that proceed in four steps: $(i)$ Estimate latent variables associated to rows and columns; $(ii)$ Partition the table in blocks according to the row/column latents; $(iii)$ Apply a sequential (e.g. Lempel-Ziv) coder to each of the blocks; $(iv)$ Append a compressed encoding of the latents. We evaluate it on several benchmark datasets, and study optimal compression in a probabilistic model for that tabular data, whereby latent values are independent and table entries are conditionally independent given the latent values. We prove that the model has a well defined entropy rate and satisfies an asymptotic equipartition property. We also prove that classical compression schemes such as Lempel-Ziv and finite-state encoders do not achieve this rate. On the other hand, the latent estimation strategy outlined above achieves the optimal rate.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
346,563
2305.06036
FusionDepth: Complement Self-Supervised Monocular Depth Estimation with Cost Volume
Multi-view stereo depth estimation based on cost volume usually works better than self-supervised monocular depth estimation except for moving objects and low-textured surfaces. In this paper, we therefore propose a multi-frame depth estimation framework in which monocular depth can be refined continuously by multi-frame sequential constraints, leveraging a Bayesian fusion layer over several iterations. Both monocular and multi-view networks can be trained with no depth supervision. Our method also enhances interpretability when combining monocular estimation with the multi-view cost volume. Detailed experiments show that our method surpasses state-of-the-art unsupervised methods utilizing single or multiple frames at test time on the KITTI benchmark.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
363,380
1802.03887
A Systems Theory Approach to the Synthesis of Minimum Noise Phase-Insensitive Quantum Amplifiers
We present a systems theory approach to the proof of a result bounding the required level of added quantum noise in a phase-insensitive quantum amplifier. We also present a synthesis procedure for constructing a quantum optical phase-insensitive quantum amplifier which adds the minimum level of quantum noise and achieves a required gain and bandwidth. This synthesis procedure is based on a singularly perturbed quantum system and leads to an amplifier involving two squeezers and two beamsplitters.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
90,091
2501.07818
A Multi-Encoder Frozen-Decoder Approach for Fine-Tuning Large Language Models
Among parameter-efficient fine-tuning methods, freezing has emerged as a popular strategy for speeding up training, reducing catastrophic forgetting, and improving downstream performance. We investigate the impact of freezing the decoder in a multi-task setup comprising diverse natural language tasks, aiming to reduce deployment overhead and enhance portability to novel tasks. Our experiments, conducted by fine-tuning both individual and multi-task setups on the AlexaTM model, reveal that freezing decoders is highly effective for tasks with natural language outputs and mitigates catastrophic forgetting in multilingual tasks. However, we find that pairing frozen decoders with a larger model can effectively maintain or even enhance performance in structured and QA tasks, making it a viable strategy for a broader range of task types.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
524,523
1909.03785
Belief Regulated Dual Propagation Nets for Learning Action Effects on Groups of Articulated Objects
In recent years, graph neural networks have been successfully applied for learning the dynamics of complex and partially observable physical systems. However, their use in the robotics domain is, to date, still limited. In this paper, we introduce Belief Regulated Dual Propagation Networks (BRDPN), a general-purpose learnable physics engine, which enables a robot to predict the effects of its actions in scenes containing groups of articulated multi-part objects. Specifically, our framework extends recently proposed propagation networks (PropNets) and consists of two complementary components, a physics predictor and a belief regulator. While the former predicts the future states of the object(s) manipulated by the robot, the latter constantly corrects the robot's knowledge regarding the objects and their relations. Our results showed that after training in a simulator, the robot can reliably predict the consequences of its actions at the object trajectory level and exploit its own interaction experience to correct its belief about the state of the environment, enabling better predictions in partially observable environments. Furthermore, the trained model was transferred to the real world and verified in predicting trajectories of pushed interacting objects whose joint relations were initially unknown. We compared BRDPN against PropNets, and showed that BRDPN performs consistently well. Moreover, BRDPN can adapt its physics predictions online, since the relations can be predicted online.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
144,601
2103.13427
Multi-Label Classification Neural Networks with Hard Logical Constraints
Multi-label classification (MC) is a standard machine learning problem in which a data point can be associated with a set of classes. A more challenging scenario is given by hierarchical multi-label classification (HMC) problems, in which every prediction must satisfy a given set of hard constraints expressing subclass relationships between classes. In this paper, we propose C-HMCNN(h), a novel approach for solving HMC problems, which, given a network h for the underlying MC problem, exploits the hierarchy information in order to produce predictions coherent with the constraints and to improve performance. Furthermore, we extend the logic used to express HMC constraints in order to be able to specify more complex relations among the classes and propose a new model CCN(h), which extends C-HMCNN(h) and is again able to satisfy and exploit the constraints to improve performance. We conduct an extensive experimental analysis showing the superior performance of both C-HMCNN(h) and CCN(h) when compared to state-of-the-art models in both the HMC and the general MC setting with hard logical constraints.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
226,478
1902.04335
Hyperbolic Disk Embeddings for Directed Acyclic Graphs
Obtaining continuous representations of structural data such as directed acyclic graphs (DAGs) has gained attention in machine learning and artificial intelligence. However, embedding complex DAGs in which the numbers of both ancestors and descendants of nodes increase exponentially is difficult. To tackle this problem, we develop Disk Embeddings, a framework for embedding DAGs into quasi-metric spaces. Existing state-of-the-art methods, Order Embeddings and Hyperbolic Entailment Cones, are instances of Disk Embedding in Euclidean space and spheres, respectively. Furthermore, we propose a novel method, Hyperbolic Disk Embeddings, to handle the exponential growth of relations. The results of our experiments show that our Disk Embedding models outperform existing methods, especially on complex DAGs other than trees.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
121,318
2209.08097
Noise transfer for unsupervised domain adaptation of retinal OCT images
Optical coherence tomography (OCT) imaging from different camera devices causes challenging domain shifts and can lead to a severe drop in accuracy for machine learning models. In this work, we introduce a minimal noise adaptation method based on a singular value decomposition (SVDNA) to overcome the domain gap between target domains from three different device manufacturers in retinal OCT imaging. Our method utilizes the difference in noise structure to successfully bridge the domain gap between different OCT devices and transfer the style from unlabeled target domain images to source images for which manual annotations are available. We demonstrate how this method, despite its simplicity, matches or even outperforms state-of-the-art unsupervised domain adaptation methods for semantic segmentation on a public OCT dataset. SVDNA can be integrated with just a few lines of code into the augmentation pipeline of any network, in contrast to many state-of-the-art domain adaptation methods, which often need to change the underlying model architecture or train a separate style transfer model. The full code implementation for SVDNA is available at https://github.com/ValentinKoch/SVDNA.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
317,992
2010.16409
Inherent Trade-offs in the Fair Allocation of Treatments
Explicit and implicit bias clouds human judgement, leading to discriminatory treatment of minority groups. A fundamental goal of algorithmic fairness is to avoid the pitfalls in human judgement by learning policies that improve the overall outcomes while providing fair treatment to protected classes. In this paper, we propose a causal framework that learns optimal intervention policies from data subject to fairness constraints. We define two measures of treatment bias and infer best treatment assignment that minimizes the bias while optimizing overall outcome. We demonstrate that there is a dilemma of balancing fairness and overall benefit; however, allowing preferential treatment to protected classes in certain circumstances (affirmative action) can dramatically improve the overall benefit while also preserving fairness. We apply our framework to data containing student outcomes on standardized tests and show how it can be used to design real-world policies that fairly improve student test scores. Our framework provides a principled way to learn fair treatment policies in real-world settings.
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
false
false
204,057
2402.10009
Zero-Shot Unsupervised and Text-Based Audio Editing Using DDPM Inversion
Editing signals using large pre-trained models, in a zero-shot manner, has recently seen rapid advancements in the image domain. However, this wave has yet to reach the audio domain. In this paper, we explore two zero-shot editing techniques for audio signals, which use DDPM inversion with pre-trained diffusion models. The first, which we coin ZEro-shot Text-based Audio (ZETA) editing, is adopted from the image domain. The second, named ZEro-shot UnSupervized (ZEUS) editing, is a novel approach for discovering semantically meaningful editing directions without supervision. When applied to music signals, this method exposes a range of musically interesting modifications, from controlling the participation of specific instruments to improvisations on the melody. Samples and code can be found in https://hilamanor.github.io/AudioEditing/ .
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
429,766
2208.04820
Rapid Development of a Mobile Robot Simulation Environment
Robotics simulation provides many advantages during the development of an intelligent ground vehicle (IGV) such as testing the software components in varying scenarios without requiring a complete physical robot. This paper discusses a 3D simulation environment created using rapid application development and the Unity game engine to enable testing during a mobile robotics competition. Our experience shows that the simulation environment contributed greatly to the development of software for the competition. The simulator also contributed to the hardware development of the robot.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
312,233
2501.18277
Sebra: Debiasing Through Self-Guided Bias Ranking
Ranking samples by fine-grained estimates of spuriosity (the degree to which spurious cues are present) has recently been shown to significantly benefit bias mitigation, over the traditional binary biased-\textit{vs}-unbiased partitioning of train sets. However, this spuriosity ranking comes with the requirement of human supervision. In this paper, we propose a debiasing framework based on our novel \ul{Se}lf-Guided \ul{B}ias \ul{Ra}nking (\emph{Sebra}), which mitigates biases (spurious correlations) via an automatic ranking of data points by spuriosity within their respective classes. Sebra leverages a key local symmetry in Empirical Risk Minimization (ERM) training -- the ease of learning a sample via ERM inversely correlates with its spuriosity; the fewer spurious correlations a sample exhibits, the harder it is to learn, and vice versa. However, globally across iterations, ERM tends to deviate from this symmetry. Sebra dynamically steers ERM to correct this deviation, facilitating the sequential learning of attributes in increasing order of difficulty, \ie, decreasing order of spuriosity. As a result, the sequence in which Sebra learns samples naturally provides spuriosity rankings. We use the resulting fine-grained bias characterization in a contrastive learning framework to mitigate biases from multiple sources. Extensive experiments show that Sebra consistently outperforms previous state-of-the-art unsupervised debiasing techniques across multiple standard benchmarks, including UrbanCars, BAR, CelebA, and ImageNet-1K. Code, pre-trained models, and training logs are available at https://kadarsh22.github.io/sebra_iclr25/.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
528,638
1904.05236
Curriculum semi-supervised segmentation
This study investigates a curriculum-style strategy for semi-supervised CNN segmentation, which devises a regression network to learn image-level information such as the size of a target region. These regressions are used to effectively regularize the segmentation network, constraining softmax predictions of the unlabeled images to match the inferred label distributions. Our framework is based on inequality constraints that tolerate uncertainties with inferred knowledge, e.g., regressed region size, and can be employed for a large variety of region attributes. We evaluated our proposed strategy for left ventricle segmentation in magnetic resonance images (MRI), and compared it to standard proposal-based semi-supervision strategies. Our strategy leverages unlabeled data more efficiently and achieves very competitive results, approaching the performance of full supervision.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
127,244
1907.01453
A Note On The Popularity of Stochastic Optimization Algorithms in Different Fields: A Quantitative Analysis from 2007 to 2017
Stochastic optimization algorithms are often used to solve complex large-scale optimization problems in various fields. To date, there have been a number of stochastic optimization algorithms such as Genetic Algorithm, Cuckoo Search, Tabu Search, Simulated Annealing, Particle Swarm Optimization, Ant Colony Optimization, etc. Each algorithm has some advantages and disadvantages. Currently, there is no study that can help researchers to choose the most popular optimization algorithm to deal with the problems in different research fields. In this note, a quantitative analysis of the popularity of 14 stochastic optimization algorithms in 18 different research fields in the last ten years from 2007 to 2017 is provided. This quantitative analysis can help researchers/practitioners select the best optimization algorithm to solve complex large-scale optimization problems in the fields of Engineering, Computer science, Operations research, Mathematics, Physics, Chemistry, Automation control systems, Materials science, Energy fuels, Mechanics, Telecommunications, Thermodynamics, Optics, Environmental sciences ecology, Water resources, Transportation, Construction building technology, and Robotics.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
137,334
2302.03768
Catch Me If You Can: Improving Adversaries in Cyber-Security With Q-Learning Algorithms
The ongoing rise in cyberattacks and the lack of skilled professionals in the cybersecurity domain to combat these attacks show the need for automated tools capable of detecting an attack with good performance. Attackers disguise their actions and launch attacks that consist of multiple actions, which are difficult to detect. Therefore, improving defensive tools requires their calibration against a well-trained attacker. In this work, we propose a model of an attacking agent and environment and evaluate its performance using basic Q-Learning, Naive Q-learning, and DoubleQ-Learning, all of which are variants of Q-Learning. The attacking agent is trained with the goal of exfiltrating data whereby all the hosts in the network have a non-zero detection probability. Results show that the DoubleQ-Learning agent has the best overall performance rate by successfully achieving the goal in $70\%$ of the interactions.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
344,456
2408.16119
Data Formulator 2: Iteratively Creating Rich Visualizations with AI
To create rich visualizations, data analysts often need to iterate back and forth between data processing and chart specification to achieve their goals. To achieve this, analysts need not only proficiency in data transformation and visualization tools but also effort to manage the branching history consisting of many different versions of data and charts. Recent LLM-powered AI systems have greatly improved visualization authoring experiences, for example by mitigating manual data transformation barriers via LLMs' code generation ability. However, these systems do not work well for iterative visualization authoring, because they often require analysts to provide, in a single turn, a text-only prompt that fully describes the complex visualization task to be performed, which is unrealistic for both users and models in many cases. In this paper, we present Data Formulator 2, an LLM-powered visualization system to address these challenges. With Data Formulator 2, users describe their visualization intent with blended UI and natural language inputs, and data transformations are delegated to the AI. To support iteration, Data Formulator 2 lets users navigate their iteration history and reuse previous designs towards new ones so that they don't need to start from scratch every time. In a user study with eight participants, we observed that Data Formulator 2 allows participants to develop their own iteration strategies to complete challenging data exploration sessions.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
484,196
2310.07197
MatChat: A Large Language Model and Application Service Platform for Materials Science
The prediction of chemical synthesis pathways plays a pivotal role in materials science research. Challenges, such as the complexity of synthesis pathways and the lack of comprehensive datasets, currently hinder our ability to predict these chemical processes accurately. However, recent advancements in generative artificial intelligence (GAI), including automated text generation and question-answering systems, coupled with fine-tuning techniques, have facilitated the deployment of large-scale AI models tailored to specific domains. In this study, we harness the power of the LLaMA2-7B model and enhance it through a learning process that incorporates 13,878 pieces of structured material knowledge data. This specialized AI model, named MatChat, focuses on predicting inorganic material synthesis pathways. MatChat exhibits remarkable proficiency in generating and reasoning with knowledge in materials science. Although MatChat requires further refinement to meet the diverse material design needs, this research undeniably highlights its impressive reasoning capabilities and innovative potential in the field of materials science. MatChat is now accessible online and open for use, with both the model and its application framework available as open source. This study establishes a robust foundation for collaborative innovation in the integration of generative AI in materials science.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
398,868
2405.02935
Enabling Patient-side Disease Prediction via the Integration of Patient Narratives
Disease prediction holds considerable significance in modern healthcare, because of its crucial role in facilitating early intervention and implementing effective prevention measures. However, most recent disease prediction approaches heavily rely on laboratory test outcomes (e.g., blood tests and medical imaging from X-rays). Gaining access to such data for precise disease prediction is often a complex task from the standpoint of a patient, and such data only become available after a patient consultation. To make disease prediction available from the patient side, we propose Personalized Medical Disease Prediction (PoMP), which predicts diseases using patient health narratives including textual descriptions and demographic information. By applying PoMP, patients can gain a clearer comprehension of their conditions, empowering them to directly seek appropriate medical specialists and thereby reducing the time spent navigating healthcare communication to locate suitable doctors. We conducted extensive experiments using real-world data from Haodf to showcase the effectiveness of PoMP.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
451,981
2306.09835
Subset Selection Based On Multiple Rankings in the Presence of Bias: Effectiveness of Fairness Constraints for Multiwinner Voting Score Functions
We consider the problem of subset selection where one is given multiple rankings of items and the goal is to select the highest ``quality'' subset. Score functions from the multiwinner voting literature have been used to aggregate rankings into quality scores for subsets. We study this setting of subset selection problems when, in addition, rankings may contain systemic or unconscious biases toward a group of items. For a general model of input rankings and biases, we show that requiring the selected subset to satisfy group fairness constraints can improve the quality of the selection with respect to unbiased rankings. Importantly, we show that for fairness constraints to be effective, different multiwinner score functions may require a drastically different number of rankings: While for some functions, fairness constraints need an exponential number of rankings to recover a close-to-optimal solution, for others, this dependency is only polynomial. This result relies on a novel notion of ``smoothness'' of submodular functions in this setting that quantifies how well a function can ``correctly'' assess the quality of items in the presence of bias. The results in this paper can be used to guide the choice of multiwinner score functions for the subset selection setting considered here; we additionally provide a tool to empirically enable this.
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
false
true
373,984
1911.11876
Dataset-On-Demand: Automatic View Search and Presentation for Data Discovery
Many data problems are solved when the right view of a combination of datasets is identified. Finding such a view is challenging because of the many tables spread across many databases, data lakes, and cloud storage in modern organizations. Finding relevant tables and identifying how to combine them is a difficult and time-consuming process that hampers users' productivity. In this paper, we describe Dataset-On-Demand (DoD), a system that lets users specify the schema of the view they want, and have the system find views for them. With many underlying data sources, the number of resulting views for any given query is high, and the burden of choosing the right one is onerous to users. DoD uses a new method, 4C, to reduce the size of the view choice space for users. 4C classifies views into 4 classes: compatible views are exactly the same, contained views present a subsumption relationship, complementary views are unionable, and contradictory views have incompatible values that indicate fundamental differences between views. These 4 classes permit different presentation strategies to reduce the total number of views a user must consider. We evaluate DoD on two key metrics of interest: its ability to reduce the size of the choice space, and its end-to-end performance. DoD finds all views within minutes, and reduces the number of views presented to users by 2-10x.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
155,243
2210.15543
Beyond the Return: Off-policy Function Estimation under User-specified Error-measuring Distributions
Off-policy evaluation often refers to two related tasks: estimating the expected return of a policy and estimating its value function (or other functions of interest, such as density ratios). While recent works on marginalized importance sampling (MIS) show that the former can enjoy provable guarantees under realizable function approximation, the latter is only known to be feasible under much stronger assumptions such as prohibitively expressive discriminators. In this work, we provide guarantees for off-policy function estimation under only realizability, by imposing proper regularization on the MIS objectives. Compared to commonly used regularization in MIS, our regularizer is much more flexible and can account for an arbitrary user-specified distribution, under which the learned function will be close to the groundtruth. We provide exact characterization of the optimal dual solution that needs to be realized by the discriminator class, which determines the data-coverage assumption in the case of value-function learning. As another surprising observation, the regularizer can be altered to relax the data-coverage requirement, and completely eliminate it in the ideal case with strong side information.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
327,001
2405.13997
Sigmoid Gating is More Sample Efficient than Softmax Gating in Mixture of Experts
The softmax gating function is arguably the most popular choice in mixture of experts modeling. Despite its widespread use in practice, the softmax gating may lead to unnecessary competition among experts, potentially causing the undesirable phenomenon of representation collapse due to its inherent structure. In response, the sigmoid gating function has been recently proposed as an alternative and has been demonstrated empirically to achieve superior performance. However, a rigorous examination of the sigmoid gating function is lacking in current literature. In this paper, we verify theoretically that the sigmoid gating, in fact, enjoys a higher sample efficiency than the softmax gating for the statistical task of expert estimation. Towards that goal, we consider a regression framework in which the unknown regression function is modeled as a mixture of experts, and study the rates of convergence of the least squares estimator under the over-specified case in which the number of fitted experts is larger than the true value. We show that two gating regimes naturally arise and, in each of them, we formulate an identifiability condition for the expert functions and derive the corresponding convergence rates. In both cases, we find that experts formulated as feed-forward networks with commonly used activation such as ReLU and GELU enjoy faster convergence rates under the sigmoid gating than those under softmax gating. Furthermore, given the same choice of experts, we demonstrate that the sigmoid gating function requires a smaller sample size than its softmax counterpart to attain the same error of expert estimation and, therefore, is more sample efficient.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
456,188
2110.15278
Self-supervised EEG Representation Learning for Automatic Sleep Staging
Background: Deep learning models have shown great success in automating tasks in sleep medicine by learning from carefully annotated Electroencephalogram (EEG) data. However, effectively utilizing a large amount of raw EEG remains a challenge. Objective: In this paper, we aim to learn robust vector representations from massive unlabeled EEG signals, such that the learned vectorized features (1) are expressive enough to replace the raw signals in the sleep staging task; and (2) provide better predictive performance than supervised models in scenarios of fewer labels and noisy samples. Methods: We propose a self-supervised model, named Contrast with the World Representation (ContraWR), for EEG signal representation learning, which uses global statistics from the dataset to distinguish signals associated with different sleep stages. The ContraWR model is evaluated on three real-world EEG datasets that include both at-home and in-lab EEG recording settings. Results: ContraWR outperforms 4 recent self-supervised learning methods on the sleep staging task across 3 large EEG datasets. ContraWR also beats supervised learning when fewer training labels are available (e.g., 4% accuracy improvement when less than 2% data is labeled). Moreover, the model provides informative representative feature structures in 2D projection. Conclusions: We show that ContraWR is robust to noise and can provide high-quality EEG representations for downstream prediction tasks. The proposed model can be generalized to other unsupervised physiological signal learning tasks. Future directions include exploring task-specific data augmentations and combining self-supervised with supervised methods, building upon the initial success of self-supervised learning in this paper.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
263,831
2012.00202
Field-wise Learning for Multi-field Categorical Data
We propose a new method for learning with multi-field categorical data. Multi-field categorical data are usually collected over many heterogeneous groups. These groups can be reflected in the categories under a field. The existing methods try to learn a universal model that fits all data, which is challenging and inevitably results in learning a complex model. In contrast, we propose a field-wise learning method leveraging the natural structure of data to learn simple yet efficient one-to-one field-focused models with appropriate constraints. In doing this, the models can be fitted to each category and thus can better capture the underlying differences in data. We present a model that utilizes linear models with variance and low-rank constraints, to help it generalize better and reduce the number of parameters. The model is also interpretable in a field-wise manner. As the dimensionality of multi-field categorical data can be very high, the models applied to such data are mostly over-parameterized. Our theoretical analysis can potentially explain the effect of over-parametrization on the generalization of our model. It also supports the variance constraints in the learning objective. The experiment results on two large-scale datasets show the superior performance of our model, the trend of the generalization error bound, and the interpretability of learning outcomes. Our code is available at https://github.com/lzb5600/Field-wise-Learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
209,054
2502.07645
Beyond Behavior Cloning: Robustness through Interactive Imitation and Contrastive Learning
Behavior cloning (BC) traditionally relies on demonstration data, assuming the demonstrated actions are optimal. This can lead to overfitting under noisy data, particularly when expressive models are used (e.g., the energy-based model in Implicit BC). To address this, we extend behavior cloning into an iterative process of optimal action estimation within the Interactive Imitation Learning framework. Specifically, we introduce Contrastive policy Learning from Interactive Corrections (CLIC). CLIC leverages human corrections to estimate a set of desired actions and optimizes the policy to select actions from this set. We provide theoretical guarantees for the convergence of the desired action set to optimal actions in both single and multiple optimal action cases. Extensive simulation and real-robot experiments validate CLIC's advantages over existing state-of-the-art methods, including stable training of energy-based models, robustness to feedback noise, and adaptability to diverse feedback types beyond demonstrations. Our code will be publicly available soon.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
532,700
2009.11027
KoBE: Knowledge-Based Machine Translation Evaluation
We propose a simple and effective method for machine translation evaluation which does not require reference translations. Our approach is based on (1) grounding the entity mentions found in each source sentence and candidate translation against a large-scale multilingual knowledge base, and (2) measuring the recall of the grounded entities found in the candidate vs. those found in the source. Our approach achieves the highest correlation with human judgements on 9 out of the 18 language pairs from the WMT19 benchmark for evaluation without references, which is the largest number of wins for a single evaluation method on this task. On 4 language pairs, we also achieve higher correlation with human judgements than BLEU. To foster further research, we release a dataset containing 1.8 million grounded entity mentions across 18 language pairs from the WMT19 metrics track data.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
197,050
1803.03186
RAN Enablers for 5G Radio Resource Management
This paper presents the description of several key RAN enablers for the radio resource management (RRM) framework of the fifth generation (5G) radio access network (RAN), referred to as building blocks of the 5G RRM. In particular, the following key RAN enablers are discussed: i) interference management techniques for dense and dynamic deployments, focusing on cell-edge performance enhancement; ii) dynamic traffic steering mechanisms that aim to attain the optimum mapping of 5G services to any available resources when and where needed by considering the peculiarities of different air interface variants (AIVs); iii) resource management strategies that deal with network slices; and iv) tight interworking between novel 5G AIVs and evolved legacy AIVs such as Long-term Evolution (LTE). Evaluation results for each of these key RAN enablers are also presented.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
92,203
2012.10711
Quantum reinforcement learning in continuous action space
Quantum reinforcement learning (QRL) is one promising algorithm proposed for near-term quantum devices. Early QRL proposals are effective at solving problems in discrete action space, but often suffer from the curse of dimensionality in the continuous domain due to discretization. To address this problem, we propose a quantum Deep Deterministic Policy Gradient algorithm that is efficient at solving both classical and quantum sequential decision problems in the continuous domain. As an application, our method can solve the quantum state-generation problem in a single shot: it only requires a one-shot optimization to generate a model that outputs the desired control sequence for arbitrary target state. In comparison, the standard quantum control method requires optimizing for each target state. Moreover, our method can also be used to physically reconstruct an unknown quantum state.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
212,420
1706.06987
A Generative Model of Group Conversation
Conversations with non-player characters (NPCs) in games are typically confined to dialogue between a human player and a virtual agent, where the conversation is initiated and controlled by the player. To create richer, more believable environments for players, we need conversational behavior to reflect initiative on the part of the NPCs, including conversations that include multiple NPCs who interact with one another as well as the player. We describe a generative computational model of group conversation between agents, an abstract simulation of discussion in a small group setting. We define conversational interactions in terms of rules for turn taking and interruption, as well as belief change, sentiment change, and emotional response, all of which are dependent on agent personality, context, and relationships. We evaluate our model using a parameterized expressive range analysis, observing correlations between simulation parameters and features of the resulting conversations. This analysis confirms, for example, that character personalities will predict how often they speak, and that heterogeneous groups of characters will generate more belief change.
true
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
75,779
2207.00740
PhilaeX: Explaining the Failure and Success of AI Models in Malware Detection
The explanation of an AI model's prediction used to support decision making in cyber security is of critical importance. It is especially so when the model's incorrect prediction can lead to severe damages or even losses to lives and critical assets. However, most existing AI models lack the ability to provide explanations on their prediction results, despite their strong performance in most scenarios. In this work, we propose a novel explainable AI method, called PhilaeX, that provides the heuristic means to identify the optimized subset of features to form the complete explanations of AI models' predictions. It identifies the features that lead to the model's borderline prediction, and those with positive individual contributions are extracted. The feature attributions are then quantified through the optimization of a Ridge regression model. We verify the explanation fidelity through two experiments. First, we assess our method's capability in correctly identifying the activated features in adversarial samples of Android malware, through the feature attribution values from PhilaeX. Second, deduction and augmentation tests are used to assess the fidelity of the explanations. The results show that PhilaeX is able to explain different types of classifiers correctly, with higher-fidelity explanations, compared to state-of-the-art methods such as LIME and SHAP.
false
false
false
false
true
false
true
false
false
false
false
false
true
false
false
false
false
false
305,872
1807.00686
YH Technologies at ActivityNet Challenge 2018
This notebook paper presents an overview and comparative analysis of our systems designed for the following five tasks in ActivityNet Challenge 2018: temporal action proposals, temporal action localization, dense-captioning events in videos, trimmed action recognition, and spatio-temporal action localization.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
101,896
2009.07165
Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses
As predictive models are increasingly being deployed in high-stakes decision-making, there has been a lot of interest in developing algorithms which can provide recourses to affected individuals. While developing such tools is important, it is even more critical to analyse and interpret a predictive model, and vet it thoroughly to ensure that the recourses it offers are meaningful and non-discriminatory before it is deployed in the real world. To this end, we propose a novel model agnostic framework called Actionable Recourse Summaries (AReS) to construct global counterfactual explanations which provide an interpretable and accurate summary of recourses for the entire population. We formulate a novel objective which simultaneously optimizes for correctness of the recourses and interpretability of the explanations, while minimizing overall recourse costs across the entire population. More specifically, our objective enables us to learn, with optimality guarantees on recourse correctness, a small number of compact rule sets each of which capture recourses for well defined subpopulations within the data. We also demonstrate theoretically that several of the prior approaches proposed to generate recourses for individuals are special cases of our framework. Experimental evaluation with real world datasets and user studies demonstrate that our framework can provide decision makers with a comprehensive overview of recourses corresponding to any black box model, and consequently help detect undesirable model biases and discrimination.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
195,851
2007.05428
Joint Blind Deconvolution and Robust Principal Component Analysis for Blood Flow Estimation in Medical Ultrasound Imaging
This paper addresses the problem of high-resolution Doppler blood flow estimation from an ultrafast sequence of ultrasound images. Formulating the separation of clutter and blood components as an inverse problem has been shown in the literature to be a good alternative to spatio-temporal singular value decomposition (SVD)-based clutter filtering. In particular, a deconvolution step has recently been embedded in such a problem to mitigate the influence of the experimentally measured point spread function (PSF) of the imaging system. Deconvolution was shown in this context to improve the accuracy of the blood flow reconstruction. However, measuring the PSF requires non-trivial experimental setups. To overcome this limitation, we propose herein a blind deconvolution method able to estimate both the blood component and the PSF from Doppler data. Numerical experiments conducted on simulated and in vivo data demonstrate qualitatively and quantitatively the effectiveness of the proposed approach in comparison with the previous method based on experimentally measured PSF and two other state-of-the-art approaches.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
186,671
1712.10136
Learning Deep and Compact Models for Gesture Recognition
We look at the problem of developing a compact and accurate model for gesture recognition from videos in a deep-learning framework. Towards this we propose a joint 3DCNN-LSTM model that is end-to-end trainable and is shown to be better suited to capture the dynamic information in actions. The solution achieves close to state-of-the-art accuracy on the ChaLearn dataset, with only half the model size. We also explore ways to derive a much more compact representation in a knowledge distillation framework followed by model compression. The final model is less than $1~MB$ in size, which is less than one hundredth of our initial model, with a drop of $7\%$ in accuracy, and is suitable for real-time gesture recognition on mobile devices.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
87,456
2209.02681
How important are activation functions in regression and classification? A survey, performance comparison, and future directions
Inspired by biological neurons, the activation functions play an essential part in the learning process of any artificial neural network commonly used in many real-world problems. Various activation functions have been proposed in the literature for classification as well as regression tasks. In this work, we survey the activation functions that have been employed in the past as well as the current state-of-the-art. In particular, we present various developments in activation functions over the years and the advantages as well as disadvantages or limitations of these activation functions. We also discuss classical (fixed) activation functions, including rectifier units, and adaptive activation functions. In addition to discussing the taxonomy of activation functions based on characterization, a taxonomy of activation functions based on applications is presented. To this end, the systematic comparison of various fixed and adaptive activation functions is performed for classification data sets such as the MNIST, CIFAR-10, and CIFAR-100. In recent years, a physics-informed machine learning framework has emerged for solving problems related to scientific computations. For this purpose, we also discuss various requirements for activation functions that have been used in the physics-informed machine learning framework. Furthermore, various comparisons are made among different fixed and adaptive activation functions using various machine learning libraries such as TensorFlow, Pytorch, and JAX.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
316,276
2411.10004
EyeDiff: text-to-image diffusion model improves rare eye disease diagnosis
The rising prevalence of vision-threatening retinal diseases poses a significant burden on global healthcare systems. Deep learning (DL) offers a promising solution for automatic disease screening but demands substantial data. Collecting and labeling large volumes of ophthalmic images across various modalities encounters several real-world challenges, especially for rare diseases. Here, we introduce EyeDiff, a text-to-image model designed to generate multimodal ophthalmic images from natural language prompts and evaluate its applicability in diagnosing common and rare diseases. EyeDiff is trained on eight large-scale datasets using the advanced latent diffusion model, covering 14 ophthalmic image modalities and over 80 ocular diseases, and is adapted to ten multi-country external datasets. The generated images accurately capture essential lesional characteristics, achieving high alignment with text prompts as evaluated by objective metrics and human experts. Furthermore, integrating generated images significantly enhances the accuracy of detecting minority classes and rare eye diseases, surpassing traditional oversampling methods in addressing data imbalance. EyeDiff effectively tackles the issue of data imbalance and insufficiency typically encountered in rare diseases and addresses the challenges of collecting large-scale annotated images, offering a transformative solution to enhance the development of expert-level disease diagnosis models in the ophthalmic field.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
508,453
2406.12387
Performant ASR Models for Medical Entities in Accented Speech
Recent strides in automatic speech recognition (ASR) have accelerated its adoption in the medical domain, where performance on accented medical named entities (NE) such as drug names, diagnoses, and lab results is largely unknown. We rigorously evaluate multiple ASR models on a clinical English dataset of 93 African accents. Our analysis reveals that despite some models achieving low overall word error rates (WER), errors in clinical entities are higher, potentially posing substantial risks to patient safety. To empirically demonstrate this, we extract clinical entities from transcripts, develop a novel algorithm to align ASR predictions with these entities, and compute medical NE Recall, medical WER, and character error rate. Our results show that fine-tuning on accented clinical speech improves medical WER by a wide margin (25-34% relative), improving the models' practical applicability in healthcare environments.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
465,379
1506.06068
A general framework for the IT-based clustering methods
Previously, we proposed a physically inspired rule to organize the data points in a sparse yet effective structure, called the in-tree (IT) graph, which is able to capture a wide class of underlying cluster structures in the datasets, especially for the density-based datasets. Although there are some redundant edges or lines between clusters that need to be removed automatically, this IT graph has a big advantage compared with the k-nearest-neighborhood (k-NN) or the minimal spanning tree (MST) graph, in that the redundant edges in the IT graph are much more distinguishable and thus can be easily determined by several methods previously proposed by us. In this paper, we propose a general framework to reconstruct the IT graph, based on an initial neighborhood graph, such as the k-NN or MST graph, and the corresponding graph distances. Our previous way of constructing the IT graph turns out to be a special case of this general framework. This general framework 1) can make the IT graph capture a wider class of underlying cluster structures in the datasets, especially for the manifolds, and 2) should be more effective at clustering sparse or graph-based datasets.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
44,375
1101.3929
Characteristic Generators and Dualization for Tail-Biting Trellises
This paper focuses on dualizing tail-biting trellises, particularly KV-trellises. These trellises are based on characteristic generators, as introduced by Koetter/Vardy (2003), and may be regarded as a natural generalization of minimal conventional trellises, even though they are not necessarily minimal. Two dualization techniques will be investigated: the local dualization, introduced by Forney (2001) for general normal graphs, and a linear algebra based dualization tailored to the specific class of tail-biting BCJR-trellises, introduced by Nori/Shankar (2006). It turns out that, in general, the BCJR-dual is a subtrellis of the local dual, while for KV-trellises these two coincide. Furthermore, making use of both the BCJR-construction and the local dualization, it will be shown that for each complete set of characteristic generators of a code there exists a complete set of characteristic generators of the dual code such that their resulting KV-trellises are dual to each other if paired suitably. This proves a stronger version of a conjecture formulated by Koetter/Vardy.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
8,867
2404.14092
Multi-agent Reinforcement Learning-based Joint Precoding and Phase Shift Optimization for RIS-aided Cell-Free Massive MIMO Systems
Cell-free (CF) massive multiple-input multiple-output (mMIMO) is a promising technique for achieving high spectral efficiency (SE) using multiple distributed access points (APs). However, harsh propagation environments often lead to significant communication performance degradation due to high penetration loss. To overcome this issue, we introduce the reconfigurable intelligent surface (RIS) into the CF mMIMO system as a low-cost and power-efficient solution. In this paper, we focus on optimizing the joint precoding design of the RIS-aided CF mMIMO system to maximize the sum SE. This involves optimizing the precoding matrix at the APs and the reflection coefficients at the RIS. To tackle this problem, we propose a fully distributed multi-agent reinforcement learning (MARL) algorithm that incorporates fuzzy logic (FL). Unlike conventional approaches that rely on alternating optimization techniques, our FL-based MARL algorithm only requires local channel state information, which reduces the need for high backhaul capacity. Simulation results demonstrate that our proposed FL-MARL algorithm effectively reduces computational complexity while achieving similar performance as conventional MARL methods.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
448,570
2002.04169
Edge Cache-assisted Secure Low-Latency Millimeter Wave Transmission
In this paper, we consider an edge cache-assisted millimeter wave cloud radio access network (C-RAN). Each remote radio head (RRH) in the C-RAN has a local cache, which can pre-fetch and store the files requested by the actuators. Multiple RRHs form a cluster to cooperatively serve the actuators, which acquire their required files either from the local caches or from the central processor via multicast fronthaul links. For such a scenario, we formulate a beamforming design problem to minimize the secure transmission delay under the transmit power constraint of each RRH. Due to the difficulty of directly solving the formulated problem, we divide it into two independent ones: i) minimizing the fronthaul transmission delay by jointly optimizing the transmit and receive beamforming; ii) minimizing the maximum access transmission delay by jointly designing cooperative beamforming among RRHs. An alternating iterative algorithm is proposed to solve the first optimization problem. For the latter, we first design the analog beamforming based on the channel state information of the actuators. Then, with the aid of successive convex approximation and $S$-procedure techniques, a semidefinite program (SDP) is formulated, and an iterative algorithm is proposed through SDP relaxation. Finally, simulation results are provided to verify the performance of the proposed schemes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
163,527
2305.10472
Nine tips for ecologists using machine learning
Due to their high predictive performance and flexibility, machine learning models are an appropriate and efficient tool for ecologists. However, implementing a machine learning model is not yet a trivial task and may seem intimidating to ecologists with no previous experience in this area. Here we provide a series of tips to help ecologists in implementing machine learning models. We focus on classification problems as many ecological studies aim to assign data into predefined classes such as ecological states or biological entities. Each of the nine tips identifies a common error, trap or challenge in developing machine learning models and provides recommendations to facilitate their use in ecological studies.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
365,092
2309.00625
Identifying Secure Operating Ranges for DER Control using Bilevel Optimization
Active distribution grids are accommodating an increasing number of controllable electric loads and distributed energy resources (DERs). A majority of these DERs are managed by entities other than the distribution utility, such as individual customers or third-party aggregators, who control the loads and DERs without consideration of any distribution grid constraints. This makes it challenging for a distribution system operator (DSO) to allow third-party aggregators and transmission operators to fully exploit the flexibility offered by these resources while also ensuring that distribution grid constraints such as voltage magnitude limits are not violated. In this paper, we develop a bilevel optimization-based framework to determine the aggregate power flexibility that can be obtained from an unbalanced distribution grid while ensuring that there is no disaggregation solution that leads to grid constraint violations. The results are a set of constraints and operating rules that are easy to communicate, and which provide the entities that procure flexibility from DERs (e.g. transmission operators or third-party aggregators) with the ability to freely implement their own disaggregation strategy without intervention from the DSO. The proposed approach is tested on two unbalanced distribution feeders and our simulation results indicate that it is possible to determine a wide range of aggregate power flexibility, as long as a simple set of rules for DER control activation are followed.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
389,362
2406.05510
Representation Learning with Conditional Information Flow Maximization
This paper proposes an information-theoretic representation learning framework, named conditional information flow maximization, to extract noise-invariant sufficient representations for the input data and target task. It encourages the learned representations to have good feature uniformity and sufficient predictive ability, which can enhance the generalization of pre-trained language models (PLMs) for the target task. Firstly, an information flow maximization principle is proposed to learn more sufficient representations for the input and target by simultaneously maximizing both input-representation and representation-label mutual information. Unlike the information bottleneck, we handle the input-representation information in an opposite way to avoid the over-compression issue of latent representations. In addition, to mitigate the negative effect of potential redundant features from the input, we design a conditional information minimization principle to eliminate negative redundant features while preserving noise-invariant features. Experiments on 13 language understanding benchmarks demonstrate that our method effectively improves the performance of PLMs for classification and regression. Extensive experiments show that the learned representations are more sufficient, robust, and transferable.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
462,165
2303.01313
Weakly-supervised HOI Detection via Prior-guided Bi-level Representation Learning
Human-object interaction (HOI) detection plays a crucial role in human-centric scene understanding and serves as a fundamental building block for many vision tasks. One generalizable and scalable strategy for HOI detection is to use weak supervision, learning from image-level annotations only. This is inherently challenging due to ambiguous human-object associations, the large search space of detecting HOIs, and highly noisy training signals. A promising strategy to address those challenges is to exploit knowledge from large-scale pretrained models (e.g., CLIP), but a direct knowledge distillation strategy~\citep{liao2022gen} does not perform well in the weakly-supervised setting. In contrast, we develop a CLIP-guided HOI representation capable of incorporating the prior knowledge at both the image level and the HOI instance level, and adopt a self-taught mechanism to prune incorrect human-object associations. Experimental results on HICO-DET and V-COCO show that our method outperforms previous works by a sizable margin, showing the efficacy of our HOI representation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
348,923