Dataset schema:

id: string (9 to 16 characters)
title: string (4 to 278 characters)
abstract: string (3 to 4.08k characters)
cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
__index_level_0__: int64 (0 to 541k)
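The 18 boolean columns above are a flattened one-hot encoding of each paper's categories. A minimal sketch of collapsing them back into a label list (column names are taken from the schema; the flag values below are those of the first record in this dump):

```python
# Column order of the 18 boolean label columns, as listed in the schema above.
LABEL_COLS = ["cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
              "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
              "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other"]

def labels_of(row: dict) -> list[str]:
    """Collapse the one-hot boolean columns of a row into category names."""
    return [c for c in LABEL_COLS if row.get(c)]

# Flags of the first record (2108.05876), in schema column order.
row = dict(zip(LABEL_COLS, [False, False, False, True, False, False,
                            False, False, False, False, False, False,
                            False, True, False, False, False, False]))
print(labels_of(row))  # → ['cs.SI', 'cs.CY']
```

The same helper works on any row representation (a pandas row converted with `to_dict()`, or a Hugging Face datasets example) as long as the column names match.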
2108.05876
An Early Look at the Gettr Social Network
This paper presents the first data-driven analysis of Gettr, a new social network platform launched by former US President Donald Trump's team. Among other things, we find that users on the platform heavily discuss politics, with a focus on the Trump campaign in the US and Bolsonaro's in Brazil. Activity on the platform has steadily been decreasing since its launch, although a core of verified users and early adopters kept posting and became central to it. Finally, although toxicity has been increasing over time, the average level of toxicity is still lower than that recently observed on other fringe social networks like Gab and 4chan. Overall, we provide a first quantitative look at this new community, observing a lack of organic engagement and activity.
labels: cs.SI, cs.CY
__index_level_0__: 250,440
2302.09240
Beamforming and Phase Shift Design for HR-IRS-aided Directional Modulation Network with a Malicious Attacker
In this paper, we propose to use a hybrid relay-intelligent reflecting surface (HR-IRS) to improve the security performance of a directional modulation (DM) system. In particular, the eavesdropper in this system works in full-duplex (FD) mode and eavesdrops on the confidential message (CM) while sending malicious jamming. We aim to maximize the secrecy rate (SR) by jointly optimizing the receive beamforming, transmit beamforming, and phase shift matrix (PSM) of the HR-IRS. Since the optimization problem is non-convex and the variables are coupled to each other, we solve it by iteratively optimizing these variables. The receive and transmit beamforming are obtained based on the generalized Rayleigh-Ritz theorem and Dinkelbach's transform, respectively. For the PSM, two methods, called separate optimization of PSM (SO-PSM) and joint optimization of PSM (JO-PSM), are proposed. Accordingly, two iterative algorithms are proposed, namely maximizing SR based on SO-PSM (Max-SR-SOP) and maximizing SR based on JO-PSM (Max-SR-JOP). The former has better performance and the latter has lower complexity. The simulation results show that when the HR-IRS has a sufficient power budget, the proposed Max-SR-SOP and Max-SR-JOP enable the HR-IRS-aided DM network to obtain a higher SR than a passive IRS-aided DM network.
labels: cs.IT
__index_level_0__: 346,336
2501.04155
MM-GEN: Enhancing Task Performance Through Targeted Multimodal Data Curation
Vision-language models (VLMs) are highly effective but often underperform on specialized tasks; for example, Llava-1.5 struggles with chart and diagram understanding due to scarce task-specific training data. Existing training data, sourced from general-purpose datasets, fails to capture the nuanced details needed for these tasks. We introduce MM-Gen, a scalable method that generates task-specific, high-quality synthetic text for candidate images by leveraging stronger models. MM-Gen employs a three-stage targeted process: partitioning data into subgroups, generating targeted text based on task descriptions, and filtering out redundant and outlier data. Fine-tuning VLMs with data generated by MM-Gen leads to significant performance gains, including 29% on spatial reasoning and 15% on diagram understanding for Llava-1.5 (7B). Compared to human-curated caption data, MM-Gen achieves up to 1.6x better improvements for the original models, proving its effectiveness in enhancing task-specific VLM performance and bridging the gap between general-purpose datasets and specialized requirements. Code available at https://github.com/sjoshi804/MM-Gen.
labels: cs.LG, cs.CL, cs.CV
__index_level_0__: 523,111
2106.13500
TableSense: Spreadsheet Table Detection with Convolutional Neural Networks
Spreadsheet table detection is the task of detecting all tables on a given sheet and locating their respective ranges. Automatic table detection is a key enabling technique and an initial step in spreadsheet data intelligence. However, the detection task is challenged by the diversity of table structures and table layouts on the spreadsheet. Considering the analogy between the cell matrix of a spreadsheet and the pixel matrix of an image, and encouraged by the successful application of Convolutional Neural Networks (CNNs) in computer vision, we have developed TableSense, a novel end-to-end framework for spreadsheet table detection. First, we devise an effective cell featurization scheme to better leverage the rich information in each cell; second, we develop an enhanced convolutional neural network model for table detection to meet the domain-specific requirement of precise table boundary detection; third, we propose an effective uncertainty metric to guide an active-learning-based smart sampling algorithm, which enables the efficient build-up of a training dataset with 22,176 tables on 10,220 sheets with broad coverage of diverse table structures and layouts. Our evaluation shows that TableSense is highly effective, with 91.3% recall and 86.5% precision in the EoB-2 metric, a significant improvement over both the detection algorithms currently used in commodity spreadsheet tools and state-of-the-art convolutional neural networks in computer vision.
labels: cs.IR
__index_level_0__: 243,095
2105.07961
Joint Optimization of Hadamard Sensing and Reconstruction in Compressed Sensing Fluorescence Microscopy
Compressed sensing fluorescence microscopy (CS-FM) proposes a scheme whereby fewer measurements are collected during sensing and reconstruction is performed to recover the image. Much work has gone into optimizing the sensing and reconstruction portions separately. We propose a method of jointly optimizing both sensing and reconstruction end-to-end under a total measurement constraint, enabling learning of the optimal sensing scheme concurrently with the parameters of a neural-network-based reconstruction network. We train our model on a rich dataset of confocal, two-photon, and wide-field microscopy images comprising a variety of biological samples. We show that our method outperforms several baseline sensing schemes and a regularized regression reconstruction algorithm.
labels: cs.CV
__index_level_0__: 235,612
2407.11353
Preconditioned Gradient Descent Finds Over-Parameterized Neural Networks with Sharp Generalization for Nonparametric Regression
We consider nonparametric regression by an over-parameterized two-layer neural network trained by gradient descent (GD) or its variant in this paper. We show that, if the neural network is trained with a novel Preconditioned Gradient Descent (PGD) with early stopping and the target function has the spectral bias widely studied in the deep learning literature, the trained network renders a particularly sharp generalization bound with a minimax optimal rate of $\cO({1}/{n^{4\alpha/(4\alpha+1)}})$, which is sharper than the current standard rate of $\cO({1}/{n^{2\alpha/(2\alpha+1)}})$ with $2\alpha = d/(d-1)$ when the data is distributed uniformly on the unit sphere in $\RR^d$ and $n$ is the size of the training data. When the target function has no spectral bias, we prove that a neural network trained with regular GD with early stopping still enjoys a minimax optimal rate, and in this case our results do not require distributional assumptions, in contrast with the currently known results. Our results are built upon two significant technical contributions. First, uniform convergence to the NTK is established during the training process by PGD or GD, so that we can obtain a nice decomposition of the neural network function at any step of GD or PGD into a function in the RKHS and an error function with a small $L^{\infty}$-norm. Second, local Rademacher complexity is employed to tightly bound the Rademacher complexity of the function class comprising all the possible neural network functions obtained by GD or PGD. Our results also indicate that PGD can be another way of avoiding the usual linear regime of the NTK and obtaining a sharper generalization bound, because PGD induces a different kernel with lower kernel complexity during the training than the regular NTK induced by the network architecture trained by regular GD.
labels: cs.LG
__index_level_0__: 473,408
2104.08540
DWUG: A large Resource of Diachronic Word Usage Graphs in Four Languages
Word meaning is notoriously difficult to capture, both synchronically and diachronically. In this paper, we describe the creation of the largest resource of graded contextualized, diachronic word meaning annotation in four different languages, based on 100,000 human semantic proximity judgments. We thoroughly describe the multi-round incremental annotation process, the choice for a clustering algorithm to group usages into senses, and possible - diachronic and synchronic - uses for this dataset.
labels: cs.CL
__index_level_0__: 230,842
1904.02390
Interaction-aware Multi-agent Tracking and Probabilistic Behavior Prediction via Adversarial Learning
In order to enable high-quality decision making and motion planning of intelligent systems such as robots and autonomous vehicles, accurate probabilistic predictions for surrounding interactive objects are a crucial prerequisite. Although many research studies have been devoted to making predictions on a single entity, it remains an open challenge to forecast future behaviors for multiple interactive agents simultaneously. In this work, we take advantage of the Generative Adversarial Network (GAN), owing to its capability of distribution learning, and propose a generic multi-agent probabilistic prediction and tracking framework which takes the interactions among multiple entities into account, in which all the entities are treated as a whole. However, since GANs are very hard to train, we conduct an empirical study and present the relationship between training performance and hyperparameter values with a numerical case study. The results imply that the proposed model can capture the mean, variance, and multi-modality of the ground-truth distribution. Moreover, we apply the proposed approach to a real-world task of vehicle behavior prediction to demonstrate its effectiveness and accuracy. The results illustrate that the proposed model trained by adversarial learning can achieve better prediction performance than other state-of-the-art models trained by traditional supervised learning which maximizes the data likelihood. The well-trained model can also be utilized as an implicit proposal distribution for particle-filter-based Bayesian state estimation.
labels: cs.AI, cs.LG, cs.RO
__index_level_0__: 126,419
2205.07043
Naturalistic Causal Probing for Morpho-Syntax
Probing has become a go-to methodology for interpreting and analyzing deep neural models in natural language processing. However, there is still a lack of understanding of the limitations and weaknesses of various types of probes. In this work, we suggest a strategy for input-level intervention on naturalistic sentences. Using our approach, we intervene on the morpho-syntactic features of a sentence, while keeping the rest of the sentence unchanged. Such an intervention allows us to causally probe pre-trained models. We apply our naturalistic causal probing framework to analyze the effects of grammatical gender and number on contextualized representations extracted from three pre-trained models in Spanish: the multilingual versions of BERT, RoBERTa, and GPT-2. Our experiments suggest that naturalistic interventions lead to stable estimates of the causal effects of various linguistic properties. Moreover, our experiments demonstrate the importance of naturalistic causal probing when analyzing pre-trained models.
labels: cs.CL
__index_level_0__: 296,455
2406.07455
Reinforcement Learning from Human Feedback without Reward Inference: Model-Free Algorithm and Instance-Dependent Analysis
In this paper, we study reinforcement learning from human feedback (RLHF) under an episodic Markov decision process with a general trajectory-wise reward model. We develop a model-free RLHF best policy identification algorithm, called $\mathsf{BSAD}$, without explicit reward model inference, which is a critical intermediate step in the contemporary RLHF paradigms for training large language models (LLMs). The algorithm identifies the optimal policy directly from human preference information in a backward manner, employing a dueling bandit sub-routine that constantly duels actions to identify the superior one. $\mathsf{BSAD}$ adopts a reward-free exploration and best-arm-identification-like adaptive stopping criteria to equalize the visitation among all states in the same decision step while moving to the previous step as soon as the optimal action is identifiable, leading to a provable, instance-dependent sample complexity $\tilde{\mathcal{O}}(c_{\mathcal{M}}SA^3H^3M\log\frac{1}{\delta})$ which resembles the result in classic RL, where $c_{\mathcal{M}}$ is the instance-dependent constant and $M$ is the batch size. Moreover, $\mathsf{BSAD}$ can be transformed into an explore-then-commit algorithm with logarithmic regret and generalized to discounted MDPs using a frame-based approach. Our results show that (i) sample-complexity-wise, RLHF is not significantly harder than classic RL and (ii) end-to-end RLHF may deliver improved performance by avoiding pitfalls in reward inference such as overfitting and distribution shift.
labels: cs.LG
__index_level_0__: 463,054
2501.08795
Heat transfer simulation of window frames with SPHinXsys
Maintaining a comfortable temperature inside a building requires appropriate thermal insulation of windows, which can be optimised iteratively with numerical simulation. Smoothed particle hydrodynamics (SPH) is a fully Lagrangian method widely used for simulating multi-physics applications with high computational efficiency and accuracy, and it is advantageous in physically coupled problems such as heat-fluid-solid interaction. The focus of this study is to simulate the heat transfer process in various window frames under convective boundary conditions according to ISO 10077-2:2012. This paper demonstrates the accuracy and compatibility of SPH when dealing with heat transfer problems, which ensures further development of thermal coupling with other physical fields. The results and methods used in this paper provide some guidance on how to properly handle heat transfer simulations using SPH, which can be extended to multi-physics coupled simulations in the future.
labels: cs.CE
__index_level_0__: 524,898
1907.08049
Towards $k$-connectivity in Heterogeneous Sensor Networks under Pairwise Key Predistribution
We study the secure and reliable connectivity of wireless sensor networks under the heterogeneous pairwise key predistribution scheme. This scheme was recently introduced as an extension of the random pairwise key predistribution scheme of Chan et al. to accommodate networks where the constituent sensors have different capabilities or requirements for security and connectivity. For simplicity, we consider a heterogeneous network where each of the $n$ sensors is classified as type-1 (respectively, type-2) with probability $\mu$ (respectively, $1-\mu$), where $0<\mu<1$. Each type-1 (respectively, type-2) node selects 1 (respectively, $K_n$) other nodes uniformly at random to be paired with; according to the pairwise scheme, each pair is then assigned a unique pairwise key so that they can securely communicate with each other. We establish critical conditions on $n$, $\mu$, and $K_n$ such that the resulting network has a minimum node degree of at least $k$ with high probability in the limit of large network size. Our result constitutes a zero-one law for the minimum node degree of the recently introduced inhomogeneous random K-out graph model. This constitutes a crucial step towards establishing a similar zero-one law for the $k$-connectivity of the graph, i.e., for the property that the network remains connected despite the failure of any $k-1$ nodes or links. We present numerical results that indicate the usefulness of our results in selecting the parameters of the scheme in practical settings with a finite number of sensors.
labels: cs.IT
__index_level_0__: 139,018
2003.11266
Auto-Ensemble: An Adaptive Learning Rate Scheduling based Deep Learning Model Ensembling
Ensembling deep learning models is a shortcut to promoting their deployment in new scenarios, as it avoids tuning neural networks, losses, and training algorithms from scratch. However, it is difficult to collect sufficiently accurate and diverse models in a single training run. This paper proposes Auto-Ensemble (AE), which collects checkpoints of a deep learning model and ensembles them automatically via an adaptive learning rate scheduling algorithm. The advantage of this method is that it makes the model converge to various local optima by scheduling the learning rate within a single training run. When the number of local optimal solutions tends to saturate, all the collected checkpoints are used for the ensemble. Our method is universal and can be applied to various scenarios. Experimental results on multiple datasets and neural networks demonstrate that it is effective and competitive, especially on few-shot learning. In addition, we propose a method to measure the distance among models, which lets us ensure the accuracy and diversity of the collected models.
labels: cs.LG
__index_level_0__: 169,563
2111.14651
Multi-objective Explanations of GNN Predictions
Graph Neural Networks (GNNs) have achieved state-of-the-art performance in various high-stakes prediction tasks, but multiple layers of aggregation on graphs with irregular structures make GNNs a less interpretable model. Prior methods use simpler subgraphs to simulate the full model, or counterfactuals to identify the causes of a prediction. The two families of approaches aim at two distinct objectives, "simulatability" and "counterfactual relevance", but it is not clear how the objectives can jointly influence human understanding of an explanation. We design a user study to investigate such joint effects and use the findings to design a multi-objective optimization (MOO) algorithm to find Pareto-optimal explanations that are well-balanced in simulatability and counterfactual relevance. Since the target model can be any GNN variant and may not be accessible due to privacy concerns, we design a search algorithm using zeroth-order information without accessing the architecture and parameters of the target model. Quantitative experiments on nine graphs from four applications demonstrate that the Pareto-efficient explanations dominate single-objective baselines that use first-order continuous optimization or discrete combinatorial search. The explanations are further evaluated in terms of robustness and sensitivity to show their capability of revealing convincing causes while being cautious about possible confounders. The diverse dominating counterfactuals can certify the feasibility of algorithmic recourse, which can potentially promote algorithmic fairness where humans participate in decision-making using GNNs.
labels: cs.LG
__index_level_0__: 268,662
2010.08365
Toward Accurate Person-level Action Recognition in Videos of Crowded Scenes
Detecting and recognizing human actions in videos with crowded scenes is a challenging problem due to the complex environment and diverse events. Prior works often fail to deal with this problem in two respects: (1) they do not utilize information about the scenes; (2) they lack training data for crowded and complex scenes. In this paper, we focus on improving spatio-temporal action recognition by fully utilizing scene information and collecting new data. A top-down strategy is used to overcome these limitations. Specifically, we adopt a strong human detector to detect the spatial location of persons in each frame. We then apply action recognition models to learn the spatio-temporal information from video frames on both the HIE dataset and new data with diverse scenes from the internet, which improves the generalization ability of our model. In addition, scene information is extracted by a semantic segmentation model to assist the process. As a result, our method achieved an average 26.05 wf_mAP, ranking 1st place in the ACM MM Grand Challenge 2020: Human in Events.
labels: cs.CV
__index_level_0__: 201,161
2402.06064
Formalizing Automated Market Makers in the Lean 4 Theorem Prover
Automated Market Makers (AMMs) are an integral component of the decentralized finance (DeFi) ecosystem, as they allow users to exchange crypto-assets without the need for trusted authorities or external price oracles. Although these protocols are based on relatively simple mechanisms, e.g., to algorithmically determine the exchange rate between crypto-assets, they give rise to complex economic behaviours. This complexity is witnessed by the proliferation of models that study their structural and economic properties. Currently, most of the theoretical results obtained on these models are supported by pen-and-paper proofs. This work proposes a formalization of constant-product AMMs in the Lean 4 theorem prover. To demonstrate the utility of our model, we provide mechanized proofs of key economic properties, such as arbitrage, which, to the best of our knowledge, had previously only been proved by pen and paper.
labels: cs.CE, Other
__index_level_0__: 428,145
2405.10051
MarkLLM: An Open-Source Toolkit for LLM Watermarking
LLM watermarking, which embeds imperceptible yet algorithmically detectable signals in model outputs to identify LLM-generated text, has become crucial in mitigating the potential misuse of large language models. However, the abundance of LLM watermarking algorithms, their intricate mechanisms, and the complex evaluation procedures and perspectives pose challenges for researchers and the community to easily experiment with, understand, and assess the latest advancements. To address these issues, we introduce MarkLLM, an open-source toolkit for LLM watermarking. MarkLLM offers a unified and extensible framework for implementing LLM watermarking algorithms, while providing user-friendly interfaces to ensure ease of access. Furthermore, it enhances understanding by supporting automatic visualization of the underlying mechanisms of these algorithms. For evaluation, MarkLLM offers a comprehensive suite of 12 tools spanning three perspectives, along with two types of automated evaluation pipelines. Through MarkLLM, we aim to support researchers while improving the comprehension and involvement of the general public in LLM watermarking technology, fostering consensus and driving further advancements in research and application. Our code is available at https://github.com/THU-BPM/MarkLLM.
labels: cs.CL, cs.CR
__index_level_0__: 454,633
1912.01629
A Simulation Model for Pedestrian Crowd Evacuation Based on Various AI Techniques
This paper attempts to design an intelligent simulation model for pedestrian crowd evacuation. For this purpose, cellular automata (CA) were fully integrated with fuzzy logic, the k-nearest neighbors (KNN) algorithm, and several statistical equations. In this model, each pedestrian was assigned a specific speed according to his or her physical, biological, and emotional features. The emergency behavior and evacuation efficiency of each pedestrian were evaluated by coupling his or her speed with various elements, such as the environment, pedestrian distribution, and familiarity with the exits. These elements all have a great impact on the evacuation process. Several experiments were carried out to verify the performance of the model in different emergency scenarios. The results show that the proposed model can predict the evacuation time and emergency behavior in various types of building interiors and pedestrian distributions. The research provides a good reference for the design of building evacuation systems.
labels: cs.AI, cs.MA
__index_level_0__: 156,131
2003.09887
Evaluation of Parameterized Quantum Circuits: on the relation between classification accuracy, expressibility and entangling capability
An active area of investigation in the search for quantum advantage is Quantum Machine Learning. Quantum Machine Learning, and Parameterized Quantum Circuits in a hybrid quantum-classical setup in particular, could bring advancements in accuracy by utilizing the high dimensionality of the Hilbert space as feature space. But is the ability of a quantum circuit to uniformly address the Hilbert space a good indicator of classification accuracy? In our work, we use methods and quantifications from prior art to perform a numerical study in order to evaluate the level of correlation. We find a strong correlation between the ability of the circuit to uniformly address the Hilbert space and the achieved classification accuracy for circuits that entail a single embedding layer followed by 1 or 2 circuit designs. This is based on our study encompassing 19 circuits in both 1 and 2 layer configuration, evaluated on 9 datasets of increasing difficulty. Future work will evaluate if this holds for different circuit designs.
labels: cs.NE
__index_level_0__: 169,173
1904.03855
Evolved embodied phase coordination enables robust quadruped robot locomotion
Overcoming robotics challenges in the real world requires resilient control systems capable of handling a multitude of environments and unforeseen events. Evolutionary optimization using simulations is a promising way to automatically design such control systems; however, if the disparity between simulation and the real world becomes too large, the optimization process may result in dysfunctional real-world behaviors. In this paper, we address this challenge by considering embodied phase coordination in the evolutionary optimization of a quadruped robot controller based on central pattern generators. With this method, leg phases, and indirectly also inter-leg coordination, are influenced by sensor feedback. By comparing two very similar control systems we gain insight into how the sensory feedback approach affects the evolved parameters of the control system, and how performance differs in simulation, in transfer to the real world, and across different real-world environments. We show that evolution enables the design of a control system with embodied phase coordination which is more complex than previously seen approaches, and that this system is capable of controlling a real-world multi-jointed quadruped robot. The approach reduces the performance discrepancy between simulation and the real world, and displays robustness towards new environments.
labels: cs.RO
__index_level_0__: 126,861
2102.10557
Contrastive Self-supervised Neural Architecture Search
This paper proposes a novel cell-based neural architecture search (NAS) algorithm which completely alleviates the expensive cost of data labeling inherited from supervised learning. Our algorithm capitalizes on the effectiveness of self-supervised learning for image representations, an increasingly crucial topic in computer vision. First, using only a small amount of unlabeled training data under contrastive self-supervised learning allows us to search over a more extensive search space, discovering better neural architectures without surging computational resources. Second, we entirely relieve the cost of labeled data (via the contrastive loss) in the search stage without compromising the architectures' final performance in the evaluation phase. Finally, we tackle the inherent discrete search space of the NAS problem by sequential model-based optimization via the tree-structured Parzen estimator (SMBO-TPE), enabling us to significantly reduce the computational expense of the response surface. Extensive experiments empirically show that our search algorithm can achieve state-of-the-art results with better efficiency in data-labeling cost, search time, and accuracy in final validation.
labels: cs.AI, cs.CV
__index_level_0__: 221,141
1508.02428
FactorBase: SQL for Learning A Multi-Relational Graphical Model
We describe FactorBase, a new SQL-based framework that leverages a relational database management system to support multi-relational model discovery. A multi-relational statistical model provides an integrated analysis of the heterogeneous and interdependent data resources in the database. We adopt the BayesStore design philosophy: statistical models are stored and managed as first-class citizens inside a database. Whereas previous systems like BayesStore support multi-relational inference, FactorBase supports multi-relational learning. A case study on six benchmark databases evaluates how our system supports a challenging machine learning application, namely learning a first-order Bayesian network model for an entire database. Model learning in this setting has to examine a large number of potential statistical associations across data tables. Our implementation shows how the SQL constructs in FactorBase facilitate the fast, modular, and reliable development of highly scalable model learning systems.
labels: cs.LG, cs.DB
__index_level_0__: 45,900
2206.06722
Specification sketching for Linear Temporal Logic
Virtually all verification and synthesis techniques assume that the formal specifications are readily available, functionally correct, and fully match the engineer's understanding of the given system. However, this assumption is often unrealistic in practice: formalizing system requirements is notoriously difficult, error-prone, and requires substantial training. To alleviate this severe hurdle, we propose a fundamentally novel approach to writing formal specifications, named specification sketching for Linear Temporal Logic (LTL). The key idea is that an engineer can provide a partial LTL formula, called an LTL sketch, where parts that are hard to formalize can be left out. Given a set of examples describing system behaviors that the specification should or should not allow, the task of a so-called sketching algorithm is then to complete a given sketch such that the resulting LTL formula is consistent with the examples. We show that deciding whether a sketch can be completed falls into the complexity class NP and present two SAT-based sketching algorithms. We also demonstrate that sketching is a practical approach to writing formal specifications using a prototype implementation.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
302,476
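The sketching task itself is easy to state concretely. Below is a minimal finite-trace toy in pure Python — brute-force enumeration rather than the paper's SAT encoding, with holes restricted to atomic propositions (all names hypothetical):

```python
from itertools import product

# Formulas as nested tuples over finite traces (lists of sets of atoms):
# ('ap', a), ('G', f), ('F', f), ('and', f, g), and a hole ('?', k).
def holds(f, trace, i=0):
    op = f[0]
    if op == 'ap':
        return f[1] in trace[i]
    if op == 'G':
        return all(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == 'F':
        return any(holds(f[1], trace, j) for j in range(i, len(trace)))
    return holds(f[1], trace, i) and holds(f[2], trace, i)   # 'and'

def n_holes(f):
    return 1 if f[0] == '?' else sum(
        n_holes(g) for g in f[1:] if isinstance(g, tuple))

def fill(f, sub):
    if f[0] == '?':
        return ('ap', sub[f[1]])
    if f[0] in ('G', 'F'):
        return (f[0], fill(f[1], sub))
    if f[0] == 'and':
        return ('and', fill(f[1], sub), fill(f[2], sub))
    return f

def complete(sketch, atoms, pos, neg):
    """Return a completion consistent with the examples, or None."""
    for sub in product(atoms, repeat=n_holes(sketch)):
        cand = fill(sketch, sub)
        if all(holds(cand, t) for t in pos) and \
           not any(holds(cand, t) for t in neg):
            return cand
    return None

# Sketch: (G a) AND (F ?0) — the engineer knew "always a" but left open
# what must eventually happen.
sketch = ('and', ('G', ('ap', 'a')), ('F', ('?', 0)))
pos = [[{'a'}, {'a', 'b'}], [{'a', 'b'}]]
neg = [[{'a'}, {'a'}]]
print(complete(sketch, ['a', 'b', 'c'], pos, neg))
```

The negative trace rules out filling the hole with `a`, and the positive traces rule out `c`, leaving `F b` as the only consistent completion.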
2404.15213
Automatic Classification of Subjective Time Perception Using Multi-modal Physiological Data of Air Traffic Controllers
In high-pressure environments where human individuals must simultaneously monitor multiple entities, communicate effectively, and maintain intense focus, the perception of time becomes a critical factor influencing performance and well-being. One indicator of well-being can be the person's subjective time perception. In our project $ChronoPilot$, we aim to develop a device that modulates human subjective time perception. In this study, we present a method to automatically assess the subjective time perception of air traffic controllers, a group often faced with demanding conditions, using their physiological data and eleven state-of-the-art machine learning classifiers. The physiological data consist of photoplethysmogram, electrodermal activity, and temperature data. We find that the support vector classifier works best with an accuracy of 79 % and electrodermal activity provides the most descriptive biomarker. These findings are an important step towards closing the feedback loop of our $ChronoPilot$-device to automatically modulate the user's subjective time perception. This technological advancement may promise improvements in task management, stress reduction, and overall productivity in high-stakes professions.
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
448,991
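As a rough sketch of the classification step: synthetic per-window physiological features, with a dependency-free nearest-centroid classifier standing in for the paper's support vector classifier (all feature values and labels hypothetical):

```python
import numpy as np

# Stand-in features per window: mean and std of an EDA signal, labeled
# with a subjective time perception class. The paper uses an SVC; a
# nearest-centroid classifier is sketched here to stay dependency-free.
rng = np.random.default_rng(0)
slow = rng.normal([2.0, 0.5], 0.2, (40, 2))   # "time drags": higher arousal
fast = rng.normal([1.0, 0.2], 0.2, (40, 2))   # "time flies": lower arousal
X = np.vstack([slow, fast])
y = np.array([0] * 40 + [1] * 40)

# One centroid per class; prediction = nearest centroid.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

acc = np.mean([predict(x) == c for x, c in zip(X, y)])
print(predict(np.array([1.9, 0.45])), acc > 0.9)
```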
2003.13109
Scene-Aware Error Modeling of LiDAR/Visual Odometry for Fusion-based Vehicle Localization
Localization is an essential technique in mobile robotics. In a complex environment, it is necessary to fuse different localization modules to obtain more robust results, in which the error model plays a paramount role. However, exteroceptive sensor-based odometries (ESOs), such as LiDAR/visual odometry, often deliver results with scene-related error, which is difficult to model accurately. To address this problem, this research designs a scene-aware error model for ESO, based on which a multimodal localization fusion framework is developed. In addition, an end-to-end learning method is proposed to train this error model using sparse global poses such as GPS/IMU results. The proposed method is realized for error modeling of LiDAR/visual odometry, and the results are fused with dead reckoning to examine the performance of vehicle localization. Experiments are conducted using both simulation and real-world data of experienced and unexperienced environments, and the experimental results demonstrate that with the learned scene-aware error models, vehicle localization accuracy can be largely improved and shows adaptiveness in unexperienced scenes.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
170,110
2304.05174
Electricity Demand Forecasting with Hybrid Statistical and Machine Learning Algorithms: Case Study of Ukraine
This article presents a novel hybrid approach using statistics and machine learning to forecast the national demand of electricity. As investment and operation of future energy systems require long-term electricity demand forecasts with hourly resolution, our mathematical model fills a gap in energy forecasting. The proposed methodology was constructed using hourly data from Ukraine's electricity consumption ranging from 2013 to 2020. To this end, we analysed the underlying structure of the hourly, daily and yearly time series of electricity consumption. The long-term yearly trend is evaluated using macroeconomic regression analysis. The mid-term model integrates temperature and calendar regressors to describe the underlying structure, and combines ARIMA and LSTM ``black-box'' pattern-based approaches to describe the error term. The short-term model captures the hourly seasonality through calendar regressors and multiple ARMA models for the residual. Results show that the best forecasting model is composed by combining multiple regression models and an LSTM hybrid model for residual prediction. Our hybrid model is very effective at forecasting long-term electricity consumption on an hourly resolution. In two years of out-of-sample forecasts with 17520 timesteps, it is shown to be within 96.83% accuracy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
357,525
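The layered structure — a calendar regression plus a time-series model on the residual — can be sketched on synthetic hourly data. Numbers and regressors are hypothetical, and a bare AR(1) stands in for the paper's ARIMA/LSTM residual models:

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                       # 60 synthetic days, hourly
hod = hours % 24                                 # hour-of-day calendar regressor
demand = 100 + 10 * np.sin(2 * np.pi * hod / 24) + rng.normal(0, 2, hours.size)

# Stage 1: regression on calendar dummies (one column per hour of day).
X = np.eye(24)[hod]
beta, *_ = np.linalg.lstsq(X, demand, rcond=None)
resid = demand - X @ beta

# Stage 2: AR(1) fitted to the residual by least squares.
phi = np.dot(resid[:-1], resid[1:]) / np.dot(resid[:-1], resid[:-1])

# One-step-ahead forecast: calendar component plus propagated residual.
forecast = beta[(hours[-1] + 1) % 24] + phi * resid[-1]
print(round(float(forecast), 1))
```

The hybrid split matters because the calendar part extrapolates to any horizon, while the residual model only corrects the short-term error.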
2207.05228
Uncertainty-Aware Online Merge Planning with Learned Driver Behavior
Safe and reliable autonomy solutions are a critical component of next-generation intelligent transportation systems. Autonomous vehicles in such systems must reason about complex and dynamic driving scenes in real time and anticipate the behavior of nearby drivers. Human driving behavior is highly nuanced and specific to individual traffic participants. For example, drivers might display cooperative or non-cooperative behaviors in the presence of merging vehicles. These behaviors must be estimated and incorporated in the planning process for safe and efficient driving. In this work, we present a framework for estimating the cooperation level of drivers on a freeway and plan merging maneuvers with the drivers' latent behaviors explicitly modeled. The latent parameter estimation problem is solved using a particle filter to approximate the probability distribution over the cooperation level. A partially observable Markov decision process (POMDP) that includes the latent state estimate is solved online to extract a policy for a merging vehicle. We evaluate our method in a high-fidelity automotive simulator against methods that are agnostic to latent states or rely on $\textit{a priori}$ assumptions about actor behavior.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
307,455
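The latent-state estimation step can be sketched with a bootstrap particle filter over a scalar cooperation level. The observation model and all numbers are hypothetical; the real system filters driver behavior inside a POMDP:

```python
import numpy as np

rng = np.random.default_rng(1)
true_c = 0.8                               # latent cooperation level in [0, 1]
sigma = 0.3
obs = rng.normal(true_c, sigma, size=50)   # noisy behavior observations

n = 2000
particles = rng.uniform(0.0, 1.0, n)       # uniform prior over cooperation
weights = np.ones(n) / n
for z in obs:
    # Weight particles by the Gaussian observation likelihood.
    weights *= np.exp(-0.5 * ((z - particles) / sigma) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles = np.clip(particles[idx] + rng.normal(0, 0.01, n), 0.0, 1.0)
        weights = np.ones(n) / n

estimate = float(np.sum(weights * particles))
print(round(estimate, 2))
```

The weighted particle mean approximates the posterior over the latent cooperation level, which is what the downstream POMDP planner consumes.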
2104.11408
Neural Mean Discrepancy for Efficient Out-of-Distribution Detection
Various approaches have been proposed for out-of-distribution (OOD) detection by augmenting models, input examples, training sets, and optimization objectives. Deviating from existing work, we have a simple hypothesis that standard off-the-shelf models may already contain sufficient information about the training set distribution which can be leveraged for reliable OOD detection. Our empirical study on validating this hypothesis, which measures the model activation's mean for OOD and in-distribution (ID) mini-batches, surprisingly finds that activation means of OOD mini-batches consistently deviate more from those of the training data. In addition, training data's activation means can be computed offline efficiently or retrieved from batch normalization layers as a 'free lunch'. Based upon this observation, we propose a novel metric called Neural Mean Discrepancy (NMD), which compares neural means of the input examples and training data. Leveraging the simplicity of NMD, we propose an efficient OOD detector that computes neural means by a standard forward pass followed by a lightweight classifier. Extensive experiments show that NMD outperforms state-of-the-art OOD approaches across multiple datasets and model architectures in terms of both detection accuracy and computational cost.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
231,899
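The core statistic is simple enough to show with stand-in activations — Gaussian features here instead of a trained network's feature maps, with the training mean playing the role of the batchnorm running mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "activations": ID batches drawn from the training distribution,
# OOD batches from a shifted one. A real NMD uses channel means of a
# trained network's feature maps (or batchnorm running means, "for free").
train_mean = np.zeros(64)                       # offline training statistic
id_batch = rng.normal(0.0, 1.0, (32, 64))       # in-distribution mini-batch
ood_batch = rng.normal(0.4, 1.0, (32, 64))      # shifted, out-of-distribution

def nmd(batch, ref_mean):
    """Neural Mean Discrepancy: per-feature gap between the batch's
    activation mean and the training-set mean."""
    return np.abs(batch.mean(axis=0) - ref_mean)

id_score = float(nmd(id_batch, train_mean).mean())
ood_score = float(nmd(ood_batch, train_mean).mean())
print(id_score < ood_score)
```

In the paper the per-feature discrepancies feed a lightweight classifier rather than being averaged, but the ordering above is the observation the method rests on.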
2012.11761
Bounding the Complexity of Formally Verifying Neural Networks: A Geometric Approach
In this paper, we consider the computational complexity of formally verifying the behavior of Rectified Linear Unit (ReLU) Neural Networks (NNs), where verification entails determining whether the NN satisfies convex polytopic specifications. Specifically, we show that for two different NN architectures -- shallow NNs and Two-Level Lattice (TLL) NNs -- the verification problem with (convex) polytopic constraints is polynomial in the number of neurons in the NN to be verified, when all other aspects of the verification problem are held fixed. We achieve these complexity results by exhibiting explicit (but similar) verification algorithms for each type of architecture. Both algorithms efficiently translate the NN parameters into a partitioning of the NN's input space by means of hyperplanes; this has the effect of partitioning the original verification problem into polynomially many sub-verification problems derived from the geometry of the neurons. We show that these sub-problems may be chosen so that the NN is purely affine within each, and hence each sub-problem is solvable in polynomial time by means of a Linear Program (LP). Thus, a polynomial-time algorithm for the original verification problem can be obtained using known algorithms for enumerating the regions in a hyperplane arrangement. Finally, we adapt our proposed algorithms to the verification of dynamical systems, specifically when these NN architectures are used as state-feedback controllers for LTI systems. We further evaluate the viability of this approach numerically.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
212,718
2110.04704
3D Object Detection Combining Semantic and Geometric Features from Point Clouds
In this paper, we investigate the combination of voxel-based methods and point-based methods, and propose a novel end-to-end two-stage 3D object detector named SGNet for point cloud scenes. The voxel-based methods voxelize the scene to regular grids, which can be processed with the current advanced feature learning frameworks based on convolutional layers for semantic feature learning. Whereas the point-based methods can better extract the geometric feature of the point due to the coordinate reservations. The combination of the two is an effective solution for 3D object detection from point clouds. However, most current methods use a voxel-based detection head with anchors for final classification and localization. Although the preset anchors cover the entire scene, it is not suitable for point cloud detection tasks with larger scenes and multiple categories due to the limitation of voxel size. In this paper, we propose a voxel-to-point module (VTPM) that captures semantic and geometric features. The VTPM is a Voxel-Point-Based Module that finally implements 3D object detection in point space, which is more conducive to the detection of small-size objects and avoids the presets of anchors in the inference stage. In addition, a Confidence Adjustment Module (CAM) with the center-boundary-aware confidence attention is proposed to solve the misalignment between the predicted confidence and proposals in the region of interest (RoI) selection. The SGNet proposed in this paper has achieved state-of-the-art results for 3D object detection in the KITTI dataset, especially in the detection of small-size objects such as cyclists. Actually, as of September 19, 2021, for the KITTI dataset, SGNet ranked 1st in 3D and BEV detection on cyclists with easy difficulty level, and 2nd in the 3D detection of moderate cyclists.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
260,005
1301.3867
Fast Planning in Stochastic Games
Stochastic games generalize Markov decision processes (MDPs) to a multiagent setting by allowing the state transitions to depend jointly on all player actions, and having rewards determined by multiplayer matrix games at each state. We consider the problem of computing Nash equilibria in stochastic games, the analogue of planning in MDPs. We begin by providing a generalization of finite-horizon value iteration that computes a Nash strategy for each player in general-sum stochastic games. The algorithm takes an arbitrary Nash selection function as input, which allows the translation of local choices between multiple Nash equilibria into the selection of a single global Nash equilibrium. Our main technical result is an algorithm for computing near-Nash equilibria in large or infinite state spaces. This algorithm builds on our finite-horizon value iteration algorithm, and adapts the sparse sampling methods of Kearns, Mansour and Ng (1999) to stochastic games. We conclude by describing a counterexample showing that infinite-horizon discounted value iteration, which was shown by Shapley to converge in the zero-sum case (a result we extend slightly here), does not converge in the general-sum case.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
21,179
1406.1547
Arbitrage-free exchange rate ensembles over a general trade network
It is assumed that under suitable economic and information-theoretic conditions, market exchange rates are free from arbitrage. Commodity markets in which trades occur over a complete graph are shown to be trivial. We therefore examine the vector space of no-arbitrage exchange rate ensembles over an arbitrary connected undirected graph. Consideration is given to the minimal information needed to determine an exchange rate ensemble. We conclude with a topical discussion of exchanges in which our analyses may be relevant, including the emergent but highly-regulated (and therefore not a complete graph) market for digital currencies.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
33,645
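The no-arbitrage condition — every cycle's product of exchange rates equals one, equivalently log-rates are differences of a node potential — is easy to check directly (currencies and rates below are hypothetical):

```python
import math

# Exchange rates on the edges of a connected undirected trade graph.
rate = {('USD', 'EUR'): 0.9, ('EUR', 'GBP'): 0.85, ('USD', 'GBP'): 0.765}

def get(a, b):
    """Rate for a -> b; the reverse edge carries the reciprocal rate."""
    if (a, b) in rate:
        return rate[(a, b)]
    return 1.0 / rate[(b, a)]

def cycle_product(nodes):
    return math.prod(get(nodes[k], nodes[(k + 1) % len(nodes)])
                     for k in range(len(nodes)))

p = cycle_product(['USD', 'EUR', 'GBP'])
print(math.isclose(p, 1.0))   # 0.9 * 0.85 = 0.765, so no arbitrage

rate[('USD', 'GBP')] = 0.8    # perturb one edge: cycle product != 1
print(math.isclose(cycle_product(['USD', 'EUR', 'GBP']), 1.0))
```

On an incomplete graph only the cycles that actually exist constrain the ensemble, which is what gives the space of no-arbitrage ensembles its nontrivial dimension.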
2102.10513
CheckSoft : A Scalable Event-Driven Software Architecture for Keeping Track of People and Things in People-Centric Spaces
We present CheckSoft, a scalable event-driven software architecture for keeping track of people-object interactions in people-centric applications such as airport checkpoint security areas, automated retail stores, smart libraries, and so on. The architecture works off the video data generated in real time by a network of surveillance cameras. Although there are many different aspects to automating these applications, the most difficult part of the overall problem is keeping track of the interactions between the people and the objects. CheckSoft uses finite-state-machine (FSM) based logic for keeping track of such interactions which allows the system to quickly reject any false detections of the interactions by the video cameras. CheckSoft is easily scalable since the architecture is based on multi-processing in which a separate process is assigned to each human and to each "storage container" for the objects. A storage container may be a shelf on which the objects are displayed or a bin in which the objects are stored, depending on the specific application in which CheckSoft is deployed.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
221,120
2208.14161
Identifiable Latent Causal Content for Domain Adaptation under Latent Covariate Shift
Multi-source domain adaptation (MSDA) addresses the challenge of learning a label prediction function for an unlabeled target domain by leveraging both the labeled data from multiple source domains and the unlabeled data from the target domain. Conventional MSDA approaches often rely on covariate shift or conditional shift paradigms, which assume a consistent label distribution across domains. However, this assumption proves limiting in practical scenarios where label distributions do vary across domains, diminishing its applicability in real-world settings. For example, animals from different regions exhibit diverse characteristics due to varying diets and genetics. Motivated by this, we propose a novel paradigm called latent covariate shift (LCS), which introduces significantly greater variability and adaptability across domains. Notably, it provides a theoretical assurance for recovering the latent cause of the label variable, which we refer to as the latent content variable. Within this new paradigm, we present an intricate causal generative model by introducing latent noises across domains, along with a latent content variable and a latent style variable to achieve more nuanced rendering of observational data. We demonstrate that the latent content variable can be identified up to block identifiability due to its versatile yet distinct causal structure. We anchor our theoretical insights into a novel MSDA method, which learns the label distribution conditioned on the identifiable latent content variable, thereby accommodating more substantial distribution shifts. The proposed approach showcases exceptional performance and efficacy on both simulated and real-world datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
315,239
2401.09763
CLIP Model for Images to Textual Prompts Based on Top-k Neighbors
Text-to-image synthesis, a subfield of multimodal generation, has gained significant attention in recent years. We propose a cost-effective approach for image-to-prompt generation that leverages generative models to generate textual prompts without the need for large amounts of annotated data. We divide our method into two stages: an online stage and an offline stage. We use a combination of the CLIP model and the K-nearest neighbors (KNN) algorithm. The proposed system consists of two main parts: an offline task and an online task. Our method achieves the highest metric, 0.612, among these models, which is 0.013, 0.055, 0.011 higher than Clip, Clip + KNN(top 10) respectively.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
422,376
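The retrieval stage reduces to cosine top-k over an embedding bank. A stand-in sketch with random unit vectors in place of real CLIP embeddings (prompts and dimensions hypothetical):

```python
import numpy as np

# Offline stage: a bank of prompt embeddings (in practice produced by the
# CLIP text encoder and unit-normalized). Online stage: embed the query
# image and return the top-k nearest prompts by cosine similarity.
rng = np.random.default_rng(0)
prompts = ["a photo of a cat", "a photo of a dog", "an oil painting of a ship"]
bank = rng.normal(size=(3, 8))
bank /= np.linalg.norm(bank, axis=1, keepdims=True)

query = bank[1] + 0.1 * rng.normal(size=8)    # image near the "dog" prompt
query /= np.linalg.norm(query)

def top_k(query_vec, k=2):
    sims = bank @ query_vec                   # cosine similarity (unit vectors)
    idx = np.argsort(-sims)[:k]
    return [prompts[i] for i in idx]

print(top_k(query))
```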
2410.15623
Guardians of Discourse: Evaluating LLMs on Multilingual Offensive Language Detection
Identifying offensive language is essential for maintaining safety and sustainability in the social media era. Though large language models (LLMs) have demonstrated encouraging potential in social media analytics, they lack thorough evaluation in offensive language detection, particularly in multilingual environments. We evaluate, for the first time, multilingual offensive language detection with three LLMs, GPT-3.5, Flan-T5, and Mistral, in three languages: English, Spanish, and German, in both monolingual and multilingual settings. We further examine the impact of different prompt languages and augmented translation data for the task in non-English contexts. Furthermore, we discuss the impact of the inherent bias in LLMs and the datasets on the mispredictions related to sensitive topics.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
500,642
2406.16496
Recent advancements on MPC for tracking: periodic and harmonic formulations
The main benefit of model predictive control (MPC) is its ability to steer the system to a given reference without violating the constraints while minimizing some objective. Furthermore, a suitably designed MPC controller guarantees asymptotic stability of the closed-loop system to the given reference as long as its optimization problem is feasible at the initial state of the system. Therefore, one of the limitations of classical MPC is that changing the reference may lead to an unfeasible MPC problem. Furthermore, due to a lack of deep knowledge of the system, it is possible for the user to provide a desired reference that is unfeasible or non-attainable for the MPC controller, leading to the same problem. This chapter summarizes MPC formulations recently proposed that have been designed to address these issues. In particular, thanks to the addition of an artificial reference as decision variable, the formulations achieve asymptotic stability and recursive feasibility guarantees regardless of the reference provided by the user, even if it is changed online or if it violates the system constraints. We show a recent formulation which extends this idea, achieving better performance and larger domains of attraction when working with small prediction horizons. Additional benefits of these formulations, when compared to classical MPC, are also discussed and highlighted with illustrative examples.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
467,153
1805.08707
A syllogistic system for propositions with intermediate quantifiers
This paper describes a formalism that subsumes Peterson's intermediate quantifier syllogistic system, and extends the ideas by van Eijck on Aristotle's logic. Syllogisms are expressed in a concise form making use of and extending the Monotonicity Calculus. Contradictory and contrary relationships are added so that deduction can derive propositions expressing a form of negation.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
98,231
2007.05169
Detecting Malicious Accounts in Permissionless Blockchains using Temporal Graph Properties
Modeling accounts as nodes and transactions as directed edges in a temporal directed graph of a blockchain enables us to understand the behavior (malicious or benign) of the accounts. Predictive classification of accounts as malicious or benign could help users of the permissionless blockchain platforms to operate in a secure manner. Motivated by this, we introduce temporal features such as burst and attractiveness on top of several already used graph properties such as the node degree and clustering coefficient. Using the identified features, we train various Machine Learning (ML) algorithms and identify the algorithm that performs the best in detecting which accounts are malicious. We then study the behavior of the accounts over different temporal granularities of the dataset before assigning them malicious tags. For the Ethereum blockchain, we identify that for the entire dataset the ExtraTreesClassifier performs the best among supervised ML algorithms. On the other hand, using cosine similarity on top of the results provided by unsupervised ML algorithms such as K-Means on the entire dataset, we were able to detect 554 more suspicious accounts. Further, using behavior change analysis for accounts, we identify 814 unique suspicious accounts across different temporal granularities.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
186,589
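The graph features named above (degree, clustering coefficient) plus a simple burst statistic can be computed from a toy transaction list. The data is hypothetical and the burst/attractiveness definitions in the paper differ in detail:

```python
from collections import defaultdict

# Toy transaction list: (sender, receiver, timestamp).
txs = [("a", "b", 1), ("a", "b", 2), ("a", "b", 3),   # burst towards b
       ("c", "b", 50), ("b", "d", 60), ("d", "a", 70)]

in_deg = defaultdict(int)
times_in = defaultdict(list)
und = defaultdict(set)                     # undirected view for clustering
for u, v, t in txs:
    in_deg[v] += 1
    times_in[v].append(t)
    und[u].add(v)
    und[v].add(u)

def burst(node, window=10):
    """Max number of incoming transactions in any sliding time window."""
    ts = sorted(times_in[node])
    return max((sum(1 for t in ts if t0 <= t < t0 + window) for t0 in ts),
               default=0)

def clustering(node):
    """Fraction of neighbor pairs that are themselves connected."""
    nbrs = list(und[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in und[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

print(in_deg["b"], burst("b"), round(clustering("b"), 2))
```

Feature vectors like these, computed per account and per temporal granularity, are what the ML classifiers in the paper consume.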
2407.01468
Active Shadowing (ASD): Manipulating Visual Perception of Robotics Behaviors via Implicit Communication
Explicit communication is often valued for its directness during interaction. Implicit communication, on the other hand, is indirect in that its communicative content must be inferred. Implicit communication is considered more desirable in teaming situations that requires reduced interruptions for improved fluency. In this paper, we investigate another unique advantage of implicit communication: its ability to manipulate the perception of object or behavior of interest. When communication results in the perception of an object or behavior to deviate from other information (about the object or behavior) available via observation, it introduces a discrepancy between perception and observation. We show that such a discrepancy in visual perception can benefit human-robot interaction in a controlled manner and introduce an approach referred to as active shadowing (ASD). Through user studies, we demonstrate the effectiveness of active shadowing in creating a misaligned perception of the robot's behavior and its execution in the real-world, resulting in more efficient task completion without sacrificing its understandability. We also analyze conditions under which such visual manipulation is effective.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
469,311
1910.00985
A Blueprint for Interoperable Blockchains
Research in blockchain systems has mainly focused on improving security and bridging the performance gaps between blockchains and databases. Despite many promising results, we observe a worrying trend that the blockchain landscape is fragmented in which many systems exist in silos. Apart from a handful of general-purpose blockchains, such as Ethereum or Hyperledger Fabric, there are hundreds of others designed for specific applications and typically do not talk to each other. In this paper, we describe our vision of interoperable blockchains. We argue that supporting interaction among different blockchains requires overcoming challenges that go beyond data standardization. The underlying problem is to allow smart contracts running in different blockchains to communicate. We discuss three open problems: access control, general cross-chain transactions, and cross-chain communication. We describe partial solutions to some of these problems in the literature. Finally, we propose a novel design to overcome these challenges.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
147,816
2406.11290
Iterative Utility Judgment Framework via LLMs Inspired by Relevance in Philosophy
Utility and topical relevance are critical measures in information retrieval (IR), reflecting system and user perspectives, respectively. While topical relevance has long been emphasized, utility is a higher standard of relevance and is more useful for facilitating downstream tasks, e.g., in Retrieval-Augmented Generation (RAG). When we incorporate utility judgments into RAG, we realize that the topical relevance, utility, and answering in RAG are closely related to the three types of relevance that Schutz discussed from a philosophical perspective. They are topical relevance, interpretational relevance, and motivational relevance, respectively. Inspired by the dynamic iterations of the three types of relevance, we propose an Iterative utiliTy judgmEnt fraMework (ITEM) to promote each step of the cycle of RAG. We conducted extensive experiments on multi-grade passage retrieval and factoid question-answering datasets (i.e., TREC DL, WebAP, and NQ). Experimental results demonstrate significant improvements in utility judgments, ranking of topical relevance, and answer generation upon representative baselines, including multiple single-shot utility judging approaches. Our code and benchmark can be found at https://anonymous.4open.science/r/ITEM-B486/.
false
false
false
false
true
true
true
false
true
false
false
false
false
false
false
false
false
false
464,825
1812.00602
Examining Deep Learning Architectures for Crime Classification and Prediction
In this paper, a detailed study on crime classification and prediction using deep learning architectures is presented. We examine the effectiveness of deep learning algorithms on this domain and provide recommendations for designing and training deep learning systems for predicting crime areas, using open data from police reports. Having as training data time-series of crime types per location, a comparative study of 10 state-of-the-art methods against 3 different deep learning configurations is conducted. In our experiments with five publicly available datasets, we demonstrate that the deep learning-based methods consistently outperform the existing best-performing methods. Moreover, we evaluate the effectiveness of different parameters in the deep learning architectures and give insights for configuring them in order to achieve improved performance in crime classification and finally crime prediction.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
115,310
2404.05966
THOUGHTSCULPT: Reasoning with Intermediate Revision and Search
We present THOUGHTSCULPT, a general reasoning and search method for tasks with outputs that can be decomposed into components. THOUGHTSCULPT explores a search tree of potential solutions using Monte Carlo Tree Search (MCTS), building solutions one action at a time and evaluating according to any domain-specific heuristic, which in practice is often simply an LLM evaluator. Critically, our action space includes revision actions: THOUGHTSCULPT may choose to revise part of its previous output rather than continuing to build the rest of its output. Empirically, THOUGHTSCULPT outperforms state-of-the-art reasoning methods across three challenging tasks: Story Outline Improvement (up to +30% interestingness), Mini-Crosswords Solving (up to +16% word success rate), and Constrained Generation (up to +10% concept coverage).
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
445,278
2005.11106
On the suitability of generalized regression neural networks for GNSS position time series prediction for geodetic applications in geodesy and geophysics
In this paper, the generalized regression neural network is used to predict the GNSS position time series. Using the IGS 24-hour final solution data for the Bad Hamburg permanent GNSS station in Germany, it is shown that the larger the training of the network, the higher the accuracy is, regardless of the time span of the time series. In order to analyze the performance of the neural network in various conditions, 14 permanent stations are used in different countries, namely, Spain, France, Romania, Poland, Russian Federation, United Kingdom, Czech Republic, Sweden, Ukraine, Italy, Finland, Slovak Republic, Cyprus, and Greece. The performance analysis is divided into two parts, continuous data (without gaps) and discontinuous data (having intervals of gaps with no data available). Three measures of error are presented, namely, symmetric mean absolute percentage error, standard deviation, and mean of absolute errors. It is shown that for discontinuous data the position can be predicted with an accuracy of up to 6 centimeters, while the continuous data positions present a higher prediction accuracy, as high as 3 centimeters. In order to compare the results of this machine learning algorithm with the traditional statistical approaches, the Theta method is used, which is well-established for high-accuracy time series prediction. The comparison shows that the generalized regression neural network machine learning algorithm presents better accuracy than the Theta method, possibly up to 250 times. In addition, it is approximately 4.6 times faster.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
178,388
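A generalized regression neural network is, in effect, Nadaraya-Watson kernel regression over the training set, which makes a minimal sketch straightforward (synthetic noiseless "position" series; the bandwidth and units are hypothetical):

```python
import numpy as np

# GRNN prediction = Gaussian-weighted average of the training targets,
# with the kernel width sigma as the only smoothing parameter.
t_train = np.linspace(0, 10, 200)                   # e.g. time in days
pos_train = 0.5 * np.sin(t_train) + 0.02 * t_train  # synthetic "position" (cm)

def grnn(t_query, t_train, y_train, sigma=0.3):
    w = np.exp(-0.5 * ((t_query[:, None] - t_train[None, :]) / sigma) ** 2)
    return (w @ y_train) / w.sum(axis=1)

t_new = np.array([5.0, 9.5])
pred = grnn(t_new, t_train, pos_train)
truth = 0.5 * np.sin(t_new) + 0.02 * t_new
print(np.abs(pred - truth).max() < 0.1)
```

Because the "training" is just storing the samples, accuracy grows with the amount of stored data, consistent with the paper's observation about training size.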
2402.02552
Neur2BiLO: Neural Bilevel Optimization
Bilevel optimization deals with nested problems in which a leader takes the first decision to minimize their objective function while accounting for a follower's best-response reaction. Constrained bilevel problems with integer variables are particularly notorious for their hardness. While exact solvers have been proposed for mixed-integer linear bilevel optimization, they tend to scale poorly with problem size and are hard to generalize to the non-linear case. On the other hand, problem-specific algorithms (exact and heuristic) are limited in scope. Under a data-driven setting in which similar instances of a bilevel problem are solved routinely, our proposed framework, Neur2BiLO, embeds a neural network approximation of the leader's or follower's value function, trained via supervised regression, into an easy-to-solve mixed-integer program. Neur2BiLO serves as a heuristic that produces high-quality solutions extremely fast for four applications with linear and non-linear objectives and pure and mixed-integer variables.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
426,616
2404.15786
Rethinking Model Prototyping through the MedMNIST+ Dataset Collection
The integration of deep learning based systems in clinical practice is often impeded by challenges rooted in limited and heterogeneous medical datasets. In addition, prioritization of marginal performance improvements on a few, narrowly scoped benchmarks over clinical applicability has slowed down meaningful algorithmic progress. This trend often results in excessive fine-tuning of existing methods to achieve state-of-the-art performance on selected datasets rather than fostering clinically relevant innovations. In response, this work presents a comprehensive benchmark for the MedMNIST+ database to diversify the evaluation landscape and conduct a thorough analysis of common convolutional neural networks (CNNs) and Transformer-based architectures, for medical image classification. Our evaluation encompasses various medical datasets, training methodologies, and input resolutions, aiming to reassess the strengths and limitations of widely used model variants. Our findings suggest that computationally efficient training schemes and modern foundation models hold promise in bridging the gap between expensive end-to-end training and more resource-refined approaches. Additionally, contrary to prevailing assumptions, we observe that higher resolutions may not consistently improve performance beyond a certain threshold, advocating for the use of lower resolutions, particularly in prototyping stages, to expedite processing. Notably, our analysis reaffirms the competitiveness of convolutional models compared to ViT-based architectures emphasizing the importance of comprehending the intrinsic capabilities of different model architectures. Moreover, we hope that our standardized evaluation framework will help enhance transparency, reproducibility, and comparability on the MedMNIST+ dataset collection as well as future research within the field. Code is available at https://github.com/sdoerrich97 .
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
449,239
2203.05842
Multiple Inputs Neural Networks for Medicare Fraud Detection
Medicare fraud results in considerable losses for governments and insurance companies and results in higher premiums for clients. Medicare fraud costs around 13 billion euros in Europe and between 21 billion and 71 billion US dollars per year in the United States. This study aims to use artificial neural network based classifiers to predict Medicare fraud. The main difficulty in using machine learning techniques for fraud detection, or more generally anomaly detection, is that the data sets are highly imbalanced. To detect Medicare fraud, we propose a multiple inputs deep neural network based classifier with a Long Short-Term Memory (LSTM) autoencoder component. This architecture makes it possible to take into account many sources of data without mixing them and makes the classification task easier for the final model. The latent features extracted from the LSTM autoencoder have a strong discriminating power and separate the providers into homogeneous clusters. We use the data sets from the Centers for Medicare and Medicaid Services (CMS) of the US federal government. The CMS provides publicly available data that brings together all of the cost price requests sent by American hospitals to Medicare companies. Our results show that although baseline artificial neural networks give good performance, they are outperformed by our multiple inputs neural networks. We have shown that using an LSTM autoencoder to embed the provider behavior gives better results and makes the classifiers more robust to class imbalance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
284,933
2209.04924
Meta-Reinforcement Learning via Language Instructions
Although deep reinforcement learning has recently been very successful at learning complex behaviors, it requires a tremendous amount of data to learn a task. One of the fundamental reasons for this limitation lies in the nature of the trial-and-error learning paradigm of reinforcement learning, where the agent communicates with the environment and progresses in learning relying only on the reward signal. This is implicit and rather insufficient to learn a task well. In contrast, humans are usually taught new skills via natural language instructions. Utilizing language instructions for robotic motion control to improve adaptability is a recently emerged and challenging topic. In this paper, we present a meta-RL algorithm that addresses the challenge of learning skills with language instructions in multiple manipulation tasks. On the one hand, our algorithm utilizes the language instructions to shape its interpretation of the task; on the other hand, it still learns to solve the task in a trial-and-error process. We evaluate our algorithm on the robotic manipulation benchmark (Meta-World) and it significantly outperforms state-of-the-art methods in terms of training and testing task success rates. Codes are available at \url{https://tumi6robot.wixsite.com/million}.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
316,925
1909.09565
Automatic Table Completion Using Knowledge Base
Tables are a popular data format for organizing and presenting relational information. Users often have to manually compose tables when gathering their desired information (e.g., entities and their attributes) for decision making. In this work, we propose to resolve a new type of heterogeneous query, viz. the tabular query, which contains a natural language query description, column names of the desired table, and an example row. We aim to acquire more entity tuples (rows) and automatically fill the table specified by the tabular query. We design a novel framework, AutoTableComplete, which aims to integrate schema-specific structural information with the natural language contextual information provided by the user to complete tables automatically, using a heterogeneous knowledge base (KB) as the main information source. Given a tabular query as input, our framework first constructs a set of candidate chains that connect the given example entities in the KB. We learn to select the best matching chain from these candidates using the semantic context from the tabular query. The selected chain is then converted into a SPARQL query and executed against the KB to gather a set of candidate rows, which are then ranked in order of their relevance to the tabular query to complete the desired table. We construct a new dataset based on tables in Wikipedia pages and Freebase, using which we perform a wide range of experiments to demonstrate the effectiveness of AutoTableComplete as well as present a detailed error analysis of our method.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
146,293
2407.04168
Learning Interpretable Differentiable Logic Networks
The ubiquity of neural networks (NNs) in real-world applications, from healthcare to natural language processing, underscores their immense utility in capturing complex relationships within high-dimensional data. However, NNs come with notable disadvantages, such as their "black-box" nature, which hampers interpretability, as well as their tendency to overfit the training data. We introduce a novel method for learning interpretable differentiable logic networks (DLNs) that are architectures that employ multiple layers of binary logic operators. We train these networks by softening and differentiating their discrete components, e.g., through binarization of inputs, binary logic operations, and connections between neurons. This approach enables the use of gradient-based learning methods. Experimental results on twenty classification tasks indicate that differentiable logic networks can achieve accuracies comparable to or exceeding that of traditional NNs. Equally importantly, these networks offer the advantage of interpretability. Moreover, their relatively simple structure results in the number of logic gate-level operations during inference being up to a thousand times smaller than NNs, making them suitable for deployment on edge devices.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
470,449
1708.00850
Towards Semantic Modeling of Contradictions and Disagreements: A Case Study of Medical Guidelines
We introduce a formal distinction between contradictions and disagreements in natural language texts, motivated by the need to formally reason about contradictory medical guidelines. This is a novel and potentially very useful distinction, and has not been discussed so far in NLP and logic. We also describe an NLP system capable of automatically finding contradictory medical guidelines; the system uses a combination of text analysis and information retrieval modules. Finally, we report positive evaluation results on a small corpus of contradictory medical recommendations.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
78,283
2202.07856
The NLP Task Effectiveness of Long-Range Transformers
Transformer models cannot easily scale to long sequences due to their O(N^2) time and space complexity. This has led to Transformer variants seeking to lower computational complexity, such as Longformer and Performer. While such models have theoretically greater efficiency, their effectiveness on real NLP tasks has not been well studied. We benchmark 7 variants of Transformer models on 5 difficult NLP tasks and 7 datasets. We design experiments to isolate the effect of pretraining and hyperparameter settings, to focus on their capacity for long-range attention. Moreover, we present various methods to investigate attention behaviors to illuminate model details beyond metric scores. We find that the modified attention in long-range transformers has advantages in content selection and query-guided decoding, but it comes with previously unrecognized drawbacks such as insufficient attention to distant tokens and accumulated approximation error.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
280,684
2502.11380
Exploring the Small World of Word Embeddings: A Comparative Study on Conceptual Spaces from LLMs of Different Scales
A conceptual space represents concepts as nodes and semantic relatedness as edges. Word embeddings, combined with a similarity metric, provide an effective approach to constructing such a space. Typically, embeddings are derived from traditional distributed models or encoder-only pretrained models, whose objectives directly capture the meaning of the current token. In contrast, decoder-only models, including large language models (LLMs), predict the next token, making their embeddings less directly tied to the current token's semantics. Moreover, comparative studies on LLMs of different scales remain underexplored. In this paper, we construct a conceptual space using word embeddings from LLMs of varying scales and comparatively analyze their properties. We establish a network based on a linguistic typology-inspired connectivity hypothesis, examine global statistical properties, and compare LLMs of varying scales. Locally, we analyze conceptual pairs, WordNet relations, and a cross-lingual semantic network for qualitative words. Our results indicate that the constructed space exhibits small-world properties, characterized by a high clustering coefficient and short path lengths. Larger LLMs generate more intricate spaces, with longer paths reflecting richer relational structures and connections. Furthermore, the network serves as an efficient bridge for cross-lingual semantic mapping.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
534,334
2308.15881
Interpretability-guided Data Augmentation for Robust Segmentation in Multi-centre Colonoscopy Data
Multi-centre colonoscopy images from various medical centres exhibit distinct complicating factors and overlays that impact the image content, contingent on the specific acquisition centre. Existing Deep Segmentation networks struggle to achieve adequate generalizability in such data sets, and the currently available data augmentation methods do not effectively address these sources of data variability. As a solution, we introduce an innovative data augmentation approach centred on interpretability saliency maps, aimed at enhancing the generalizability of Deep Learning models within the realm of multi-centre colonoscopy image segmentation. The proposed augmentation technique demonstrates increased robustness across different segmentation models and domains. Thorough testing on a publicly available multi-centre dataset for polyp detection demonstrates the effectiveness and versatility of our approach, which is observed both in quantitative and qualitative results. The code is publicly available at: https://github.com/nki-radiology/interpretability_augmentation
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
388,831
2410.06957
Support Vector Boosting Machine (SVBM): Enhancing Classification Performance with AdaBoost and Residual Connections
In traditional boosting algorithms, the focus on misclassified training samples emphasizes their importance based on difficulty during the learning process. While using a standard Support Vector Machine (SVM) as a weak learner in an AdaBoost framework can enhance model performance by concentrating on error samples, this approach introduces significant challenges. Specifically, SVMs, characterized by their stability and robustness, may require destabilization to fit the boosting paradigm, which in turn can constrain performance due to reliance on the weighted results from preceding iterations. To address these challenges, we propose the Support Vector Boosting Machine (SVBM), which integrates a novel subsampling process with SVM algorithms and residual connection techniques. This method updates sample weights by considering both the current model's predictions and the outputs from prior rounds, allowing for effective sparsity control. The SVBM framework enhances the ability to form complex decision boundaries, thereby improving classification performance. The MATLAB source code for SVBM can be accessed at https://github.com/junbolian/SVBM.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
496,407
2201.04343
An Efficient and Adaptive Granular-ball Generation Method in Classification Problem
Granular-ball computing is an efficient, robust, and scalable learning method for granular computing. The basis of granular-ball computing is the granular-ball generation method. This paper proposes a method for accelerating granular-ball generation by using division to replace $k$-means. It can greatly improve the efficiency of granular-ball generation while ensuring accuracy similar to the existing method. Besides, a new adaptive method for granular-ball generation is proposed by considering the elimination of granular-ball overlap and some other factors. This makes the granular-ball generation process parameter-free and completely adaptive in the true sense. In addition, this paper first provides the mathematical models for the granular-ball covering. The experimental results on some real data sets demonstrate that the two proposed granular-ball generation methods have accuracies similar to the existing method while achieving adaptiveness or acceleration.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
275,079
1705.04138
Fast Stochastic Variance Reduced ADMM for Stochastic Composition Optimization
We consider the stochastic composition optimization problem proposed in \cite{wang2017stochastic}, which has applications ranging from estimation to statistical and machine learning. We propose the first ADMM-based algorithm named com-SVR-ADMM, and show that com-SVR-ADMM converges linearly for strongly convex and Lipschitz smooth objectives, and has a convergence rate of $O( \log S/S)$, which improves upon the $O(S^{-4/9})$ rate in \cite{wang2016accelerating} when the objective is convex and Lipschitz smooth. Moreover, com-SVR-ADMM possesses a rate of $O(1/\sqrt{S})$ when the objective is convex but without Lipschitz smoothness. We also conduct experiments and show that it outperforms existing algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
73,285
1905.10671
DIANet: Dense-and-Implicit Attention Network
Attention networks have successfully boosted performance in various vision problems. Previous works lay emphasis on designing a new attention module and individually plugging it into networks. Our paper proposes a novel and simple framework that shares an attention module throughout different network layers to encourage the integration of layer-wise information; this parameter-sharing module is referred to as the Dense-and-Implicit-Attention (DIA) unit. Many choices of modules can be used in the DIA unit. Since Long Short-Term Memory (LSTM) has the capacity to capture long-distance dependency, we focus on the case where the DIA unit is a modified LSTM (referred to as DIA-LSTM). Experiments on benchmark datasets show that the DIA-LSTM unit is capable of emphasizing layer-wise feature interrelation and leads to significant improvement in image classification accuracy. We further empirically show that DIA-LSTM has a strong regularization ability for stabilizing the training of deep networks, demonstrated by experiments with the removal of skip connections or Batch Normalization in the whole residual network. The code is released at https://github.com/gbup-group/DIANet.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
132,141
2101.02046
TextBox: A Unified, Modularized, and Extensible Framework for Text Generation
In this paper, we release an open-source library, called TextBox, to provide a unified, modularized, and extensible text generation framework. TextBox aims to support a broad set of text generation tasks and models. In our library, we implement 21 text generation models on 9 benchmark datasets, covering the categories of VAE, GAN, and pretrained language models. Meanwhile, our library maintains sufficient modularity and extensibility by properly decomposing the model architecture, inference, and learning process into highly reusable modules, which allows users to easily incorporate new models into our framework. The above features make TextBox especially suitable for researchers and practitioners to quickly reproduce baseline models and develop new models. TextBox is implemented based on PyTorch, and released under Apache License 2.0 at https://github.com/RUCAIBox/TextBox.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
214,514
2501.09081
Inferring Transition Dynamics from Value Functions
In reinforcement learning, the value function is typically trained to solve the Bellman equation, which connects the current value to future values. This temporal dependency hints that the value function may contain implicit information about the environment's transition dynamics. By rearranging the Bellman equation, we show that a converged value function encodes a model of the underlying dynamics of the environment. We build on this insight to propose a simple method for inferring dynamics models directly from the value function, potentially mitigating the need for explicit model learning. Furthermore, we explore the challenges of next-state identifiability, discussing conditions under which the inferred dynamics model is well-defined. Our work provides a theoretical foundation for leveraging value functions in dynamics modeling and opens a new avenue for bridging model-free and model-based reinforcement learning.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
525,007
1611.00675
emgr - The Empirical Gramian Framework
System Gramian matrices are a well-known encoding for properties of input-output systems such as controllability, observability or minimality. These so-called system Gramians were developed in linear system theory for applications such as model order reduction of control systems. Empirical Gramians are an extension of the system Gramians to parametric and nonlinear systems as well as a data-driven method of computation. The empirical Gramian framework - emgr - implements the empirical Gramians in a uniform and configurable manner, with applications such as Gramian-based (nonlinear) model reduction, decentralized control, sensitivity analysis, parameter identification and combined state and parameter reduction.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
63,265
1811.08006
Non-invasive thermal comfort perception based on subtleness magnification and deep learning for energy efficiency
Human thermal comfort measurement plays a critical role in giving feedback signals for building energy efficiency. A non-invasive measuring method based on subtleness magnification and deep learning (NIDL) was designed to achieve a comfortable, energy efficient built environment. The method relies on skin feature data, e.g., subtle motion and texture variation, and a 315-layer deep neural network for constructing the relationship between skin features and skin temperature. A physiological experiment was conducted for collecting feature data (1.44 million) and algorithm validation. The non-invasive measurement algorithm based on a partly-personalized saturation temperature model (NIPST) was used for algorithm performance comparisons. The results show that the mean error and median error of the NIDL are 0.4834 Celsius and 0.3464 Celsius, which are equivalent to accuracy improvements of 16.28% and 4.28%, respectively.
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
113,917
2404.04221
How Lexical is Bilingual Lexicon Induction?
In contemporary machine learning approaches to bilingual lexicon induction (BLI), a model learns a mapping between the embedding spaces of a language pair. Recently, the retrieve-and-rank approach to BLI has achieved state-of-the-art results on the task. However, the problem remains challenging in low-resource settings, due to the paucity of data. The task is complicated by factors such as lexical variation across languages. We argue that the incorporation of additional lexical information into the recent retrieve-and-rank approach should improve lexicon induction. We demonstrate the efficacy of our proposed approach on XLING, improving over the previous state of the art by an average of 2\% across all language pairs.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
444,550
2012.09112
Interoperability and computational framework for simulating open channel hydraulics: application to sensitivity analysis and calibration of Gironde Estuary model
Water resource management is of crucial societal and economic importance, requiring a strong capacity for anticipating environmental change. Progress in physical process knowledge, numerical methods and computational power, allows us to address hydro-environmental problems of growing complexity. Modeling of river and marine flows is no exception. With the increase in IT resources, environmental modeling is evolving to meet the challenges of complex real-world problems. This paper presents a new distributed Application Programming Interface (API) of the open source TELEMAC-MASCARET system to run hydro-environmental simulations with the help of the interoperability concept. Use of the API encourages and facilitates the combination of worldwide reference environmental libraries with the hydro-informatic system. Consequently, the objective of the paper is to promote the interoperability concept for studies dealing with such issues as uncertainty propagation, global sensitivity analysis, optimization, multi-physics or multi-dimensional coupling. To illustrate the capability of the API, an operational problem for improving the navigation capacity of the Gironde Estuary is presented. The API potential is demonstrated in a re-calibration context. The API is used for a multivariate sensitivity analysis to quickly reveal the most influential parameters which can then be optimally calibrated with the help of a data assimilation technique.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
211,965
2411.00691
Leveraging Large Language Models for Code-Mixed Data Augmentation in Sentiment Analysis
Code-mixing (CM), where speakers blend languages within a single expression, is prevalent in multilingual societies but poses challenges for natural language processing due to its complexity and limited data. We propose using a large language model to generate synthetic CM data, which is then used to enhance the performance of task-specific models for CM sentiment analysis. Our results show that in Spanish-English, synthetic data improved the F1 score by 9.32%, outperforming previous augmentation techniques. However, in Malayalam-English, synthetic data only helped when the baseline was low; with strong natural data, additional synthetic data offered little benefit. Human evaluation confirmed that this approach is a simple, cost-effective way to generate natural-sounding CM sentences, particularly beneficial for low baselines. Our findings suggest that few-shot prompting of large language models is a promising method for CM data augmentation and has significant impact on improving sentiment analysis, an important element in the development of social influence systems.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
504,703
2003.03063
Quantum Adiabatic Theorem Revisited
In 2004 Ambainis and Regev formulated a certain form of quantum adiabatic theorem and provided an elementary proof which is especially accessible to computer scientists. Their result is achieved by discretizing the total adiabatic evolution into a sequence of unitary transformations acting on the quantum system. Here we continue this line of study by providing another elementary and shorter proof with improved bounds. Our key finding is a succinct integral representation of the difference between the target and the actual states, which yields an accurate estimation of the approximation error. Our proof can be regarded as a "continuous" version of the work by Ambainis and Regev. As applications, we show how to adiabatically prepare an arbitrary qubit state from an initial state.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
167,116
1404.2229
Towards the Safety of Human-in-the-Loop Robotics: Challenges and Opportunities for Safety Assurance of Robotic Co-Workers
The success of the human-robot co-worker team in a flexible manufacturing environment where robots learn from demonstration heavily relies on the correct and safe operation of the robot. How this can be achieved is a challenge that requires addressing both technical as well as human-centric research questions. In this paper we discuss the state of the art in safety assurance, existing as well as emerging standards in this area, and the need for new approaches to safety assurance in the context of learning machines. We then focus on robotic learning from demonstration, the challenges these techniques pose to safety assurance and indicate opportunities to integrate safety considerations into algorithms "by design". Finally, from a human-centric perspective, we stipulate that, to achieve high levels of safety and ultimately trust, the robotic co-worker must meet the innate expectations of the humans it works with. It is our aim to stimulate a discussion focused on the safety aspects of human-in-the-loop robotics, and to foster multidisciplinary collaboration to address the research challenges identified.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
32,191
math/0510276
An algorithmic and a geometric characterization of Coarsening At Random
We show that the class of conditional distributions satisfying the coarsening at Random (CAR) property for discrete data has a simple and robust algorithmic description based on randomized uniform multicovers: combinatorial objects generalizing the notion of partition of a set. However, the complexity of a given CAR mechanism can be large: the maximal "height" of the needed multicovers can be exponential in the number of points in the sample space. The results stem from a geometric interpretation of the set of CAR distributions as a convex polytope and a characterization of its extreme points. The hierarchy of CAR models defined in this way could be useful in parsimonious statistical modelling of CAR mechanisms, though the results also raise doubts in applied work as to the meaningfulness of the CAR assumption in its full generality.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
540,704
1810.01719
A Puff of Steem: Security Analysis of Decentralized Content Curation
Decentralized content curation is the process through which uploaded posts are ranked and filtered based exclusively on users' feedback. Platforms such as the blockchain-based Steemit employ this type of curation while providing monetary incentives to promote the visibility of high quality posts according to the perception of the participants. Despite the wide adoption of the platform very little is known regarding its performance and resilience characteristics. In this work, we provide a formal model for decentralized content curation that identifies salient complexity and game-theoretic measures of performance and resilience to selfish participants. Armed with our model, we provide a first analysis of Steemit identifying the conditions under which the system can be expected to correctly converge to curation while we demonstrate its susceptibility to selfish participant behaviour. We validate our theoretical results with system simulations in various scenarios.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
109,446
1604.03498
GPU-FV: Realtime Fisher Vector and Its Applications in Video Monitoring
Fisher vector has been widely used in many multimedia retrieval and visual recognition applications with good performance. However, the computation complexity prevents its usage in real-time video monitoring. In this work, we proposed and implemented GPU-FV, a fast Fisher vector extraction method with the help of modern GPUs. The challenge of implementing Fisher vector on GPUs lies in the data dependency in feature extraction and expensive memory access in Fisher vector computing. To handle these challenges, we carefully designed GPU-FV in a way that utilizes the computing power of GPU as much as possible, and applied optimizations such as loop tiling to boost the performance. GPU-FV is about 12 times faster than the CPU version, and 50\% faster than a non-optimized GPU implementation. For standard video input (320*240), GPU-FV can process each frame within 34ms on a model GPU. Our experiments show that GPU-FV obtains a similar recognition accuracy as traditional FV on VOC 2007 and Caltech 256 image sets. We also applied GPU-FV to realtime video monitoring tasks and found that GPU-FV outperforms a number of previous works. In particular, when the number of training examples is small, GPU-FV outperforms the recent popular deep CNN features borrowed from ImageNet. The code can be downloaded from the following link https://bitbucket.org/mawenjing/gpu-fv.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
54,515
2311.17937
Unlocking Spatial Comprehension in Text-to-Image Diffusion Models
We propose CompFuser, an image generation pipeline that enhances spatial comprehension and attribute assignment in text-to-image generative models. Our pipeline enables the interpretation of instructions defining spatial relationships between objects in a scene, such as `An image of a gray cat on the left of an orange dog', and generates corresponding images. This is especially important in order to provide more control to the user. CompFuser overcomes the limitation of existing text-to-image diffusion models by decoding the generation of multiple objects into iterative steps: first generating a single object and then editing the image by placing additional objects in their designated positions. To create training data for spatial comprehension and attribute assignment we introduce a synthetic data generation process that leverages a frozen large language model and a frozen layout-based diffusion model for object placement. We compare our approach to strong baselines and show that our model outperforms state-of-the-art image generation models in spatial comprehension and attribute assignment, despite being 3x to 5x smaller in parameters.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
411,476
1710.08070
Accelerated Reinforcement Learning
Policy gradient methods are widely used in reinforcement learning algorithms to search for better policies in the parameterized policy space. They perform gradient search in the policy space and are known to converge very slowly. Nesterov developed an accelerated gradient search algorithm for convex optimization problems. This has recently been extended to non-convex and also stochastic optimization. We use Nesterov's acceleration for policy gradient search in the well-known actor-critic algorithm and show convergence using the ODE method. We tested this algorithm on a scheduling problem. Here an incoming job is scheduled into one of the four queues based on the queue lengths. We see from experimental results that the algorithm using Nesterov's acceleration has significantly better performance compared to the algorithm that does not use acceleration. To the best of our knowledge this is the first time Nesterov's acceleration has been used with the actor-critic algorithm.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
83,035
2306.02083
Efficient Text-Guided 3D-Aware Portrait Generation with Score Distillation Sampling on Distribution
Text-to-3D is an emerging task that allows users to create 3D content with infinite possibilities. Existing works tackle the problem by optimizing a 3D representation with guidance from pre-trained diffusion models. An apparent drawback is that they need to optimize from scratch for each prompt, which is computationally expensive and often yields poor visual fidelity. In this paper, we propose DreamPortrait, which aims to generate text-guided 3D-aware portraits in a single-forward pass for efficiency. To achieve this, we extend Score Distillation Sampling from datapoint to distribution formulation, which injects semantic prior into a 3D distribution. However, the direct extension will lead to the mode collapse problem since the objective only pursues semantic alignment. Hence, we propose to optimize a distribution with hierarchical condition adapters and GAN loss regularization. For better 3D modeling, we further design a 3D-aware gated cross-attention mechanism to explicitly let the model perceive the correspondence between the text and the 3D-aware space. These elaborated designs enable our model to generate portraits with robust multi-view semantic consistency, eliminating the need for optimization-based methods. Extensive experiments demonstrate our model's highly competitive performance and significant speed boost against existing methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
370,754
2303.00983
Using simulation to quantify the performance of automotive perception systems
The design and evaluation of complex systems can benefit from a software simulation, sometimes called a digital twin. The simulation can be used to characterize system performance or to test its performance under conditions that are difficult to measure (e.g., nighttime for automotive perception systems). We describe the image system simulation software tools that we use to evaluate the performance of image systems for object (automobile) detection. We describe experiments with 13 different cameras with a variety of optics and pixel sizes. To measure the impact of camera spatial resolution, we designed a collection of driving scenes that had cars at many different distances. We quantified system performance by measuring average precision and we report a trend relating system resolution and object detection performance. We also quantified the large performance degradation under nighttime conditions, compared to daytime, for all cameras and a COCO pre-trained network.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
348,787
2304.08639
pgmpy: A Python Toolkit for Bayesian Networks
Bayesian Networks (BNs) are used in various fields for modeling, prediction, and decision making. pgmpy is a python package that provides a collection of algorithms and tools to work with BNs and related models. It implements algorithms for structure learning, parameter estimation, approximate and exact inference, causal inference, and simulations. These implementations focus on modularity and easy extensibility to allow users to quickly modify/add to existing algorithms, or to implement new algorithms for different use cases. pgmpy is released under the MIT License; the source code is available at: https://github.com/pgmpy/pgmpy, and the documentation at: https://pgmpy.org.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
358,773
2502.10412
Identifying relevant indicators for monitoring a National Artificial Intelligence Strategy
How can a National Artificial Intelligence Strategy be effectively monitored? To address this question, we propose a methodology consisting of two key components. First, it involves identifying relevant indicators within national AI strategies. Second, it assesses the alignment between these indicators and the strategic actions of a specific government's AI strategy, allowing for a critical evaluation of its monitoring measures. Moreover, identifying these indicators helps assess the overall quality of the strategy's structure. A lack of alignment between strategic actions and the identified indicators may reveal gaps or blind spots in the strategy. This methodology is demonstrated using the Brazilian AI strategy as a case study.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
533,859
2406.17131
Bayesian temporal biclustering with applications to multi-subject neuroscience studies
We consider the problem of analyzing multivariate time series collected on multiple subjects, with the goal of identifying groups of subjects exhibiting similar trends in their recorded measurements over time as well as time-varying groups of associated measurements. To this end, we propose a Bayesian model for temporal biclustering featuring nested partitions, where a time-invariant partition of subjects induces a time-varying partition of measurements. Our approach allows for data-driven determination of the number of subject and measurement clusters as well as estimation of the number and location of changepoints in measurement partitions. To efficiently perform model fitting and posterior estimation with Markov Chain Monte Carlo, we derive a blocked update of measurements' cluster-assignment sequences. We illustrate the performance of our model in two applications to functional magnetic resonance imaging data and to an electroencephalogram dataset. The results indicate that the proposed model can combine information from potentially many subjects to discover a set of interpretable, dynamic patterns. Experiments on simulated data compare the estimation performance of the proposed model against ground-truth values and other statistical methods, showing that it performs well at identifying ground-truth subject and measurement clusters even when no subject or time dependence is present.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
467,432
1911.05732
Antithetic integral feedback for the robust control of monostable and oscillatory biomolecular circuits
Biomolecular feedback systems are now a central application area of interest within control theory. While classical control techniques provide invaluable insight into the function and design of both natural and synthetic biomolecular systems, there are certain aspects of biological control that have proven difficult to analyze with traditional methods. To this end, we describe here how the recently developed tools of dominance analysis can be used to gain insight into the nonlinear behavior of the antithetic integral feedback circuit, a recently discovered control architecture which implements integral control of arbitrary biomolecular processes using a simple feedback mechanism. We show that dominance theory can predict both monostability and periodic oscillations in the circuit, depending on the corresponding parameters and architecture. We then use the theory to characterize the robustness of the asymptotic behavior of the circuit in a nonlinear setting.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
153,354
1702.07386
Toward Streaming Synapse Detection with Compositional ConvNets
Connectomics is an emerging field in neuroscience that aims to reconstruct the 3-dimensional morphology of neurons from electron microscopy (EM) images. Recent studies have successfully demonstrated the use of convolutional neural networks (ConvNets) for segmenting cell membranes to individuate neurons. However, there has been comparatively little success in high-throughput identification of the intercellular synaptic connections required for deriving connectivity graphs. In this study, we take a compositional approach to segmenting synapses, modeling them explicitly as an intercellular cleft co-located with an asymmetric vesicle density along a cell membrane. Instead of requiring a deep network to learn all natural combinations of this compositionality, we train lighter networks to model the simpler marginal distributions of membranes, clefts and vesicles from just 100 electron microscopy samples. These feature maps are then combined with simple rules-based heuristics derived from prior biological knowledge. Our approach to synapse detection is both more accurate than previous state-of-the-art (7% higher recall and 5% higher F1-score) and yields a 20-fold speed-up compared to the previous fastest implementations. We demonstrate by reconstructing the first complete, directed connectome from the largest available anisotropic microscopy dataset (245 GB) of mouse somatosensory cortex (S1) in just 9.7 hours on a single shared-memory CPU system. We believe that this work marks an important step toward the goal of a microscope-pace streaming connectomics pipeline.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
68,772
2312.11816
A Dual-way Enhanced Framework from Text Matching Point of View for Multimodal Entity Linking
Multimodal Entity Linking (MEL) aims at linking ambiguous mentions with multimodal information to entities in a Knowledge Graph (KG) such as Wikipedia, which plays a key role in many applications. However, existing methods suffer from shortcomings, including modality impurity such as noise in raw images and ambiguous textual entity representations, which hinder MEL. We formulate multimodal entity linking as a neural text matching problem where each piece of multimodal information (text and image) is treated as a query, and the model learns the mapping from each query to the relevant entity from candidate entities. This paper introduces a dual-way enhanced (DWE) framework for MEL: (1) our model refines queries with multimodal data and addresses semantic gaps using cross-modal enhancers between text and image information. Besides, DWE innovatively leverages fine-grained image attributes, including facial characteristics and scene features, to enhance and refine visual features. (2) By using Wikipedia descriptions, DWE enriches entity semantics and obtains a more comprehensive textual representation, which reduces the gap between textual representations and the entities in the KG. Extensive experiments on three public benchmarks demonstrate that our method achieves state-of-the-art (SOTA) performance, indicating the superiority of our model. The code is released on https://github.com/season1blue/DWE
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
416,724
2404.17892
Shared learning of powertrain control policies for vehicle fleets
Emerging data-driven approaches, such as deep reinforcement learning (DRL), aim at on-the-field learning of powertrain control policies that optimize fuel economy and other performance metrics. Indeed, they have shown great potential in this regard for individual vehicles on specific routes or drive cycles. However, for fleets of vehicles that must service a distribution of routes, DRL approaches struggle with learning stability issues that result in high variances and challenge their practical deployment. In this paper, we present a novel framework for shared learning among a fleet of vehicles through the use of a distilled group policy as the knowledge sharing mechanism for the policy learning computations at each vehicle. We detail the mathematical formulation that makes this possible. Several scenarios are considered to analyze the functionality, performance, and computational scalability of the framework with fleet size. Comparisons of the cumulative performance of fleets using our proposed shared learning approach with a baseline of individual learning agents and another state-of-the-art approach with a centralized learner show clear advantages to our approach. For example, we find a fleet average asymptotic improvement of 8.5 percent in fuel economy compared to the baseline while also improving on the metrics of acceleration error and shifting frequency for fleets serving a distribution of suburban routes. Furthermore, we include demonstrative results that show how the framework reduces variance within a fleet and also how it helps individual agents adapt better to new routes.
false
false
false
false
true
false
true
false
false
false
true
false
false
false
false
false
false
false
450,047
2002.11061
Ground Texture Based Localization Using Compact Binary Descriptors
Ground texture based localization is a promising approach to achieve high-accuracy positioning of vehicles. We present a self-contained method that can be used for global localization as well as for subsequent local localization updates, i.e. it allows a robot to localize without any knowledge of its current whereabouts, but it can also take advantage of a prior pose estimate to reduce computation time significantly. Our method is based on a novel matching strategy, which we call identity matching, that is based on compact binary feature descriptors. Identity matching treats pairs of features as matches only if their descriptors are identical. While other methods for global localization are faster to compute, our method reaches higher localization success rates, and can switch to local localization after the initial localization.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
165,600
2501.04108
TrojanDec: Data-free Detection of Trojan Inputs in Self-supervised Learning
An image encoder pre-trained by self-supervised learning can be used as a general-purpose feature extractor to build downstream classifiers for various downstream tasks. However, many studies showed that an attacker can embed a trojan into an encoder such that multiple downstream classifiers built based on the trojaned encoder simultaneously inherit the trojan behavior. In this work, we propose TrojanDec, the first data-free method to identify and recover a test input embedded with a trigger. Given a (trojaned or clean) encoder and a test input, TrojanDec first predicts whether the test input is trojaned. If not, the test input is processed in a normal way to maintain the utility. Otherwise, the test input will be further restored to remove the trigger. Our extensive evaluation shows that TrojanDec can effectively identify the trojan (if any) from a given test input and recover it under state-of-the-art trojan attacks. We further demonstrate by experiments that our TrojanDec outperforms the state-of-the-art defenses.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
523,099
2002.12589
Deep Learning Enabled Optimization of Downlink Beamforming Under Per-Antenna Power Constraints: Algorithms and Experimental Demonstration
This paper studies fast downlink beamforming algorithms using deep learning in multiuser multiple-input-single-output systems where each transmit antenna at the base station has its own power constraint. We focus on the signal-to-interference-plus-noise ratio (SINR) balancing problem which is quasi-convex but there is no efficient solution available. We first design a fast subgradient algorithm that can achieve near-optimal solution with reduced complexity. We then propose a deep neural network structure to learn the optimal beamforming based on convolutional networks and exploitation of the duality of the original problem. Two strategies of learning various dual variables are investigated with different accuracies, and the corresponding recovery of the original solution is facilitated by the subgradient algorithm. We also develop a generalization method of the proposed algorithms so that they can adapt to the varying number of users and antennas without re-training. We carry out intensive numerical simulations and testbed experiments to evaluate the performance of the proposed algorithms. Results show that the proposed algorithms achieve close to optimal solution in simulations with perfect channel information and outperform the alleged theoretically optimal solution in experiments, illustrating a better performance-complexity tradeoff than existing schemes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
166,083
2412.03582
Exploring Non-Linear Effects of Built Environment on Travel Using an Integrated Machine Learning and Inferential Modeling Approach: A Three-Wave Repeated Cross-Sectional Study
This study investigates the dynamic relationship between the built environment and travel in Austin, Texas, over a 20-year period. Using three waves of household travel surveys from 1997, 2006, and 2017, the research employs a repeated cross-sectional approach to address the limitations of traditional longitudinal and cross-sectional studies. Methodologically, it integrates machine learning and inferential modeling to uncover non-linear relationships and threshold effects of built environment characteristics on travel. Findings reveal that the built environment serves as a sustainable tool for managing travel in the long term, contributing 50% or more to the total feature importance in predicting individual travel, surpassing the combined effects of personal and household characteristics. Increased transit accessibility, local and regional destination accessibility, population and employment density, and diversity significantly reduce travel, particularly within their identified thresholds, though the magnitude of their influence varies across time periods. These findings highlight the potential of smart growth policies, such as expanding transit accessibility, promoting high-density and mixed-use development, and discouraging single-use development and peripheral sprawl, as effective strategies to reduce car dependency and manage travel demand.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
514,021
2306.04375
Learning via Wasserstein-Based High Probability Generalisation Bounds
Minimising upper bounds on the population risk or the generalisation gap has been widely used in structural risk minimisation (SRM) -- this is in particular at the core of PAC-Bayesian learning. Despite its successes and unfailing surge of interest in recent years, a limitation of the PAC-Bayesian framework is that most bounds involve a Kullback-Leibler (KL) divergence term (or its variations), which might exhibit erratic behavior and fail to capture the underlying geometric structure of the learning problem -- hence restricting its use in practical applications. As a remedy, recent studies have attempted to replace the KL divergence in the PAC-Bayesian bounds with the Wasserstein distance. Even though these bounds alleviated the aforementioned issues to a certain extent, they either hold in expectation, are for bounded losses, or are nontrivial to minimize in an SRM framework. In this work, we contribute to this line of research and prove novel Wasserstein distance-based PAC-Bayesian generalisation bounds for both batch learning with independent and identically distributed (i.i.d.) data, and online learning with potentially non-i.i.d. data. Contrary to previous art, our bounds are stronger in the sense that (i) they hold with high probability, (ii) they apply to unbounded (potentially heavy-tailed) losses, and (iii) they lead to optimizable training objectives that can be used in SRM. As a result we derive novel Wasserstein-based PAC-Bayesian learning algorithms and we illustrate their empirical advantage on a variety of experiments.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
371,722
2208.12809
Incrementality Bidding and Attribution
The causal effect of showing an ad to a potential customer versus not, commonly referred to as "incrementality", is the fundamental question of advertising effectiveness. In digital advertising three major puzzle pieces are central to rigorously quantifying advertising incrementality: ad buying/bidding/pricing, attribution, and experimentation. Building on the foundations of machine learning and causal econometrics, we propose a methodology that unifies these three concepts into a computationally viable model of both bidding and attribution which spans the randomization, training, cross validation, scoring, and conversion attribution of advertising's causal effects. Implementation of this approach is likely to secure a significant improvement in the return on investment of advertising.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
314,856
2312.11580
PlaNet-S: Automatic Semantic Segmentation of Placenta
[Purpose] To develop a fully automated semantic placenta segmentation model that integrates the U-Net and SegNeXt architectures through ensemble learning. [Methods] A total of 218 pregnant women with suspected placental anomalies who underwent magnetic resonance imaging (MRI) were enrolled, yielding 1090 annotated images for developing a deep learning model for placental segmentation. The images were standardized and divided into training and test sets. The performance of PlaNet-S, which integrates U-Net and SegNeXt within an ensemble framework, was assessed using Intersection over Union (IoU) and counting connected components (CCC) against the U-Net model. [Results] PlaNet-S had significantly higher IoU (0.73 +/- 0.13) than that of U-Net (0.78 +/- 0.010) (p<0.01). The CCC for PlaNet-S was significantly higher than that for U-Net (p<0.01), matching the ground truth in 86.0% and 56.7% of the cases, respectively. [Conclusion] PlaNet-S performed better than the traditional U-Net in placental segmentation tasks. This model addresses the challenges of time-consuming physician-assisted manual segmentation and offers the potential for diverse applications in placental imaging analyses.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
416,656
2211.12977
Linear Programming Hierarchies in Coding Theory: Dual Solutions
The rate vs. distance problem is a long-standing open problem in coding theory. Recent papers have suggested a new way to tackle this problem by appealing to a new hierarchy of linear programs. If one can find good dual solutions to these LPs, this would result in improved upper bounds for the rate vs. distance problem of linear codes. In this work, we develop the first dual feasible solutions to the LPs in this hierarchy. These match the best-known bound for a wide range of parameters. Our hope is that this is a first step towards better solutions, and improved upper bounds for the rate vs. distance problem of linear codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
332,306
1707.05929
Learning Unified Embedding for Apparel Recognition
In apparel recognition, specialized models (e.g. models trained for a particular vertical like dresses) can significantly outperform general models (i.e. models that cover a wide range of verticals). Therefore, deep neural network models are often trained separately for different verticals. However, using specialized models for different verticals is not scalable and is expensive to deploy. This paper addresses the problem of learning one unified embedding model for multiple object verticals (e.g. all apparel classes) without sacrificing accuracy. The problem is tackled from two aspects: training data and training difficulty. On the training data aspect, we find that for a single model trained with triplet loss, there is an accuracy sweet spot in terms of how many verticals are trained together. To ease the training difficulty, a novel learning scheme is proposed by using the output from specialized models as learning targets, so that L2 loss can be used instead of triplet loss. This new loss makes the training easier and enables more efficient use of the feature space. The end result is a unified model which can achieve the same retrieval accuracy as a number of separate specialized models, while having the complexity of a single model. The effectiveness of our approach is shown in experiments.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
77,318
2108.02104
Point Discriminative Learning for Data-efficient 3D Point Cloud Analysis
3D point cloud analysis has drawn a lot of research attention due to its wide applications. However, collecting massive labelled 3D point cloud data is both time-consuming and labor-intensive. This calls for data-efficient learning methods. In this work we propose PointDisc, a point discriminative learning method to leverage self-supervision for data-efficient 3D point cloud classification and segmentation. PointDisc imposes a novel point discrimination loss on the middle-level and global-level features produced by the backbone network. This point discrimination loss enforces learned features to be consistent with points belonging to the corresponding local shape region and inconsistent with randomly sampled noisy points. We conduct extensive experiments on 3D object classification and 3D semantic and part segmentation, showing the benefits of PointDisc for data-efficient learning. Detailed analysis demonstrates that PointDisc learns unsupervised features that well capture local and global geometry.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
249,222
1903.05749
Inferring 3D Shapes of Unknown Rigid Objects in Clutter through Inverse Physics Reasoning
We present a probabilistic approach for building, on the fly, 3-D models of unknown objects while being manipulated by a robot. We specifically consider manipulation tasks in piles of clutter that contain previously unseen objects. Most manipulation algorithms for performing such tasks require known geometric models of the objects in order to grasp or rearrange them robustly. One of the novel aspects of this work is the utilization of a physics engine for verifying hypothesized geometries in simulation. The evidence provided by physics simulations is used in a probabilistic framework that accounts for the fact that mechanical properties of the objects are uncertain. We present an efficient algorithm for inferring occluded parts of objects based on their observed motions and mutual interactions. Experiments using a robot show that this approach is efficient for constructing physically realistic 3-D models, which can be useful for manipulation planning. Experiments also show that the proposed approach significantly outperforms alternative approaches in terms of shape accuracy.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
124,221
2308.10606
Analyzing Complex Systems with Cascades Using Continuous-Time Bayesian Networks
Interacting systems of events may exhibit cascading behavior where events tend to be temporally clustered. While the cascades themselves may be obvious from the data, it is important to understand which states of the system trigger them. For this purpose, we propose a modeling framework based on continuous-time Bayesian networks (CTBNs) to analyze cascading behavior in complex systems. This framework allows us to describe how events propagate through the system and to identify likely sentry states, that is, system states that may lead to imminent cascading behavior. Moreover, CTBNs have a simple graphical representation and provide interpretable outputs, both of which are important when communicating with domain experts. We also develop new methods for knowledge extraction from CTBNs and we apply the proposed methodology to a data set of alarms in a large industrial system.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
386,804
2412.12538
A Scalable Approach to Benchmarking the In-Conversation Differential Diagnostic Accuracy of a Health AI
Diagnostic errors in healthcare persist as a critical challenge, with increasing numbers of patients turning to online resources for health information. While AI-powered healthcare chatbots show promise, there exists no standardized and scalable framework for evaluating their diagnostic capabilities. This study introduces a scalable benchmarking methodology for assessing health AI systems and demonstrates its application through August, an AI-driven conversational chatbot. Our methodology employs 400 validated clinical vignettes across 14 medical specialties, using AI-powered patient actors to simulate realistic clinical interactions. In systematic testing, August achieved a top-one diagnostic accuracy of 81.8% (327/400 cases) and a top-two accuracy of 85.0% (340/400 cases), significantly outperforming traditional symptom checkers. The system demonstrated 95.8% accuracy in specialist referrals and required 47% fewer questions compared to conventional symptom checkers (mean 16 vs 29 questions), while maintaining empathetic dialogue throughout consultations. These findings demonstrate the potential of AI chatbots to enhance healthcare delivery, though implementation challenges remain regarding real-world validation and integration of objective clinical data. This research provides a reproducible framework for evaluating healthcare AI systems, contributing to the responsible development and deployment of AI in clinical settings.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
517,912
2212.01076
Are Straight-Through gradients and Soft-Thresholding all you need for Sparse Training?
Turning the weights to zero when training a neural network helps in reducing the computational complexity at inference. To progressively increase the sparsity ratio in the network without causing sharp weight discontinuities during training, our work combines soft-thresholding and straight-through gradient estimation to update the raw, i.e. non-thresholded, version of zeroed weights. Our method, named ST-3 for straight-through/soft-thresholding/sparse-training, obtains SoA results, both in terms of accuracy/sparsity and accuracy/FLOPS trade-offs, when progressively increasing the sparsity ratio in a single training cycle. In particular, despite its simplicity, ST-3 favorably compares to the most recent methods, adopting differentiable formulations or bio-inspired neuroregeneration principles. This suggests that the key ingredients for effective sparsification primarily lie in the ability to give the weights the freedom to evolve smoothly across the zero state while progressively increasing the sparsity ratio. Source code and weights available at https://github.com/vanderschuea/stthree
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
334,302
2410.13566
360U-Former: HDR Illumination Estimation with Panoramic Adapted Vision Transformers
Recent illumination estimation methods have focused on enhancing the resolution and improving the quality and diversity of the generated textures. However, few have explored tailoring the neural network architecture to the Equirectangular Panorama (ERP) format utilised in image-based lighting. Consequently, high dynamic range images (HDRI) results usually exhibit a seam at the side borders and textures or objects that are warped at the poles. To address this shortcoming we propose a novel architecture, 360U-Former, based on a U-Net style Vision-Transformer which leverages the work of PanoSWIN, an adapted shifted window attention tailored to the ERP format. To the best of our knowledge, this is the first purely Vision-Transformer model used in the field of illumination estimation. We train 360U-Former as a GAN to generate HDRI from a limited field of view low dynamic range image (LDRI). We evaluate our method using current illumination estimation evaluation protocols and datasets, demonstrating that our approach outperforms existing and state-of-the-art methods without the artefacts typically associated with the use of the ERP format.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
499,584
2202.08772
A Survey of Knowledge-Intensive NLP with Pre-Trained Language Models
With the increase in model capacity brought by pre-trained language models, there emerge growing needs for more knowledgeable natural language processing (NLP) models with advanced functionalities, including providing and making flexible use of encyclopedic and commonsense knowledge. Pre-trained language models alone, however, lack the capacity to handle such knowledge-intensive NLP tasks. To address this challenge, a large number of pre-trained language models augmented with external knowledge sources have been proposed and are under rapid development. In this paper, we aim to summarize the current progress of pre-trained language model-based knowledge-enhanced models (PLMKEs) by dissecting their three vital elements: knowledge sources, knowledge-intensive NLP tasks, and knowledge fusion methods. Finally, we present the challenges of PLMKEs based on the discussion of these three elements and attempt to provide NLP practitioners with potential directions for further research.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
280,980
2412.13533
Language-guided Medical Image Segmentation with Target-informed Multi-level Contrastive Alignments
Medical image segmentation is crucial in modern medical image analysis, as it can aid in the diagnosis of various disease conditions. Recently, language-guided segmentation methods have shown promising results in automating image segmentation where text reports are incorporated as guidance. These text reports, containing image impressions and insights given by clinicians, provide auxiliary guidance. However, these methods neglect the inherent pattern gaps between the two distinct modalities, which leads to sub-optimal image-text feature fusion without proper cross-modality feature alignment. Contrastive alignment is widely used to associate image-text semantics in representation learning; however, it has not been exploited to bridge the pattern gaps in language-guided segmentation, which relies on subtle, low-level image details to represent diseases. Existing contrastive alignment methods typically align high-level global image semantics without involving low-level, localized target information, and therefore fail to explore fine-grained text guidance for language-guided segmentation. In this study, we propose a language-guided segmentation network with Target-informed Multi-level Contrastive Alignments (TMCA). TMCA enables target-informed cross-modality alignments and fine-grained text guidance to bridge the pattern gaps in language-guided segmentation. Specifically, we introduce: 1) a target-sensitive semantic distance module that enables granular image-text alignment modelling, and 2) a multi-level alignment strategy that directs text guidance to low-level image features. In addition, a language-guided target enhancement module is proposed to leverage the aligned text to redirect attention to critical localized image features. Extensive experiments on 4 image-text datasets, involving 3 medical imaging modalities, demonstrated that our TMCA achieved superior performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
518,327
1807.02754
Model-Free Optimization Using Eagle Perching Optimizer
The paper proposes a novel nature-inspired optimization technique. It mimics the perching behaviour of eagles and uses mathematical formulations to introduce a new addition to metaheuristic algorithms. The proposed algorithm is based on exploration and exploitation. It is developed into two versions with some modifications. In the first phase, both versions undergo a rigorous analysis of their performance. In the second phase, they are benchmarked using ten functions of two categories: uni-modal functions and multi-modal functions. In the third phase, we conduct a detailed analysis of the algorithm by exploiting its controlling units or variables. In the fourth and last phase, we consider real-world optimization problems with constraints. Both versions of the algorithm show appreciable performance, but the analysis puts more weight on the modified version. The competitive analysis shows that the proposed algorithm outperforms the other tested metaheuristic algorithms. The proposed method has better robustness and computational efficiency.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
102,342