id (string, 9-16 chars) | title (string, 4-278 chars) | abstract (string, 3-4.08k chars) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64, 0-541k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2102.04594 | Rationally Inattentive Utility Maximization for Interpretable Deep Image Classification | Are deep convolutional neural networks (CNNs) for image classification explainable by utility maximization with information acquisition costs? We demonstrate that deep CNNs behave equivalently (in terms of necessary and sufficient conditions) to rationally inattentive utility maximizers, a generative model used extensively in economics for human decision making. Our claim is based on extensive experiments on 200 deep CNNs from 5 popular architectures. The parameters of our interpretable model are computed efficiently via convex feasibility algorithms. As an application, we show that our economics-based interpretable model can predict the classification performance of deep CNNs trained with arbitrary parameters with accuracy exceeding 94%. This eliminates the need to re-train the deep CNNs for image classification. The theoretical foundation of our approach lies in Bayesian revealed preference studied in micro-economics. All our results are on GitHub and completely reproducible. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 219,162 |
2210.11279 | DialogUSR: Complex Dialogue Utterance Splitting and Reformulation for Multiple Intent Detection | While interacting with chatbots, users may elicit multiple intents in a single dialogue utterance. Instead of training a dedicated multi-intent detection model, we propose DialogUSR, a dialogue utterance splitting and reformulation task that first splits a multi-intent user query into several single-intent sub-queries and then recovers all the coreferred and omitted information in the sub-queries. DialogUSR can serve as a plug-in and domain-agnostic module that empowers multi-intent detection for deployed chatbots with minimal effort. We collect a high-quality naturally occurring dataset that covers 23 domains with a multi-step crowd-sourcing procedure. To benchmark the proposed dataset, we propose multiple action-based generative models that involve end-to-end and two-stage training, and conduct in-depth analyses on the pros and cons of the proposed baselines. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 325,259 |
2008.05850 | Revealing the Hidden Patterns: A Comparative Study on Profiling Subpopulations of MOOC Students | Massive Open Online Courses (MOOCs) exhibit a remarkable heterogeneity of students. The advent of complex "big data" from MOOC platforms is a challenging yet rewarding opportunity to deeply understand how students are engaged in MOOCs. Past research, looking mainly into overall behavior, may have missed patterns related to student diversity. Using a large dataset from a MOOC offered by FutureLearn, we delve into a new way of investigating hidden patterns through both machine learning and statistical modelling. In this paper, we report on clustering analysis of student activities and comparative analysis of both behavioral patterns and demographic patterns between student subpopulations in the MOOC. Our approach allows for a deeper understanding of how MOOC students behave and achieve. Our findings may be used to design adaptive strategies towards an enhanced MOOC experience. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 191,634 |
2404.03197 | A Rolling Horizon Restoration Framework for Post-disaster Restoration of Electrical Distribution Networks | Severe weather events such as floods, hurricanes, earthquakes, and large wind or ice storms can cause extensive damage to electrical distribution networks, requiring a multi-day restoration effort. Complicating the recovery process is the lack of complete and accurate information regarding the extent and locations of damages, at least during the initial part of the recovery process. These factors make workforce planning challenging. In this paper, we adopt a rolling horizon restoration framework whereby repairs are planned for adjustable finite length restoration windows. Considering both repair times as well as travel times, we show that the optimal scheduling problem with multiple crews, each with their own time budget, can be recast in terms of a cost constrained reward maximizing mTSP (traveling salesman problem) on doubly weighted graphs, where the objective is to maximize the aggregate reward earned during the upcoming restoration window, provided no crew violates its time budget and certain electrical continuity constraints are met. We propose a mixed integer linear programming (MILP) model for solving the above problem which is validated on standard IEEE PES test feeder networks. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 444,161 |
1609.04779 | Characterizing the Language of Online Communities and its Relation to Community Reception | This work investigates style and topic aspects of language in online communities: looking at both utility as an identifier of the community and correlation with community reception of content. Style is characterized using a hybrid word and part-of-speech tag n-gram language model, while topic is represented using Latent Dirichlet Allocation. Experiments with several Reddit forums show that style is a better indicator of community identity than topic, even for communities organized around specific topics. Further, there is a positive correlation between the community reception to a contribution and the style similarity to that community, but not so for topic similarity. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 61,032 |
2011.04100 | Network Optimization via Smooth Exact Penalty Functions Enabled by Distributed Gradient Computation | This paper proposes a distributed algorithm for a network of agents to solve an optimization problem with separable objective function and locally coupled constraints. Our strategy is based on reformulating the original constrained problem as the unconstrained optimization of a smooth (continuously differentiable) exact penalty function. Computing the gradient of this penalty function in a distributed way is challenging even under the separability assumptions on the original optimization problem. Our technical approach shows that the distributed computation problem for the gradient can be formulated as a system of linear algebraic equations defined by separable problem data. To solve it, we design an exponentially fast, input-to-state stable distributed algorithm that does not require the individual agent matrices to be invertible. We employ this strategy to compute the gradient of the penalty function at the current network state. Our distributed algorithmic solver for the original constrained optimization problem interconnects this estimation with the prescription of having the agents follow the resulting direction. Numerical simulations illustrate the convergence and robustness properties of the proposed algorithm. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 205,459 |
1905.08093 | The configuration model for Barabasi-Albert networks | We develop and test a rewiring method (originally proposed by Newman) which allows one to build random networks having pre-assigned degree distribution and two-point correlations. For the case of scale-free degree distributions, we discretize the tail of the distribution according to the general prescription by Dorogovtsev and Mendes. The application of this method to Barabasi-Albert (BA) networks is possible thanks to recent analytical results on their correlations, and allows us to compare the ensemble of random networks generated in the configuration model with that of "real" networks obtained from preferential attachment. For $\beta\ge 2$ ($\beta$ is the number of parent nodes in the preferential attachment scheme) the networks obtained with the configuration model are completely connected (giant component equal to 100%). In both generation schemes a clear disassortativity of the small degree nodes is demonstrated from the computation of the function $k_{nn}$. We also develop an efficient rewiring method which produces tunable variations of the assortativity coefficient $r$, and we use it to obtain maximally disassortative networks having the same degree distribution of BA networks with given $\beta$. Possible applications of this method concern assortative social networks. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 131,401 |
2409.14488 | Enhancing LLM-based Autonomous Driving Agents to Mitigate Perception Attacks | There is a growing interest in integrating Large Language Models (LLMs) with autonomous driving (AD) systems. However, AD systems are vulnerable to attacks against their object detection and tracking (ODT) functions. Unfortunately, our evaluation of four recent LLM agents against ODT attacks shows that the attacks are 63.26% successful in causing them to crash or violate traffic rules due to (1) misleading memory modules that provide past experiences for decision making, (2) limitations of prompts in identifying inconsistencies, and (3) reliance on ground truth perception data. In this paper, we introduce Hudson, a driving reasoning agent that extends prior LLM-based driving systems to enable safer decision making during perception attacks while maintaining effectiveness under benign conditions. Hudson achieves this by first instrumenting the AD software to collect real-time perception results and contextual information from the driving scene. This data is then formalized into a domain-specific language (DSL). To guide the LLM in detecting and making safe control decisions during ODT attacks, Hudson translates the DSL into natural language, along with a list of custom attack detection instructions. Following query execution, Hudson analyzes the LLM's control decision to understand its causal reasoning process. We evaluate the effectiveness of Hudson using a proprietary LLM (GPT-4) and two open-source LLMs (Llama and Gemma) in various adversarial driving scenarios. GPT-4, Llama, and Gemma achieve, on average, an attack detection accuracy of 83.3%, 63.6%, and 73.6%. Consequently, they make safe control decisions in 86.4%, 73.9%, and 80% of the attacks. Our results, following the growing interest in integrating LLMs into AD systems, highlight the strengths of LLMs and their potential to detect and mitigate ODT attacks. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 490,488 |
0910.4686 | Moderate Deviations of the Random Riccati Equation | We characterize the invariant filtering measures resulting from Kalman filtering with intermittent observations (\cite{Bruno}), where the observation arrival is modeled as a Bernoulli process. In \cite{Riccati-weakconv}, it was shown that there exists a $\overline{\gamma}^{sb}>0$ such that for every observation packet arrival probability $\overline{\gamma}$, $\overline{\gamma}>\overline{\gamma}^{sb}>0$, the sequence of random conditional error covariance matrices converges in distribution to a unique invariant distribution $\mu^{\overline{\gamma}}$ (independent of the filter initialization). In this paper, we prove that, for controllable and observable systems, $\overline{\gamma}^{sb}=0$ and that, as $\overline{\gamma}\uparrow 1$, the family $\{\mu^{\overline{\gamma}}\}_{\overline{\gamma}>0}$ of invariant distributions satisfies a moderate deviations principle (MDP) with a good rate function $I$. The rate function $I$ is explicitly identified. In particular, our results show: | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 4,791 |
cs/0310050 | Feedforward Neural Networks with Diffused Nonlinear Weight Functions | In this paper, feedforward neural networks are presented that have nonlinear weight functions based on look-up tables, which are specially smoothed in a regularization called the diffusion. The idea behind this type of network is the hypothesis that a greater number of adaptive parameters per weight function might reduce the total number of weight functions needed to solve a given problem. Then, if the computational complexity of propagation through a single such weight function were kept low, the introduced neural networks might be relatively fast. A number of tests are performed, showing that the presented neural networks may indeed perform better in some cases than classic neural networks and a number of other learning machines. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 538,016 |
1802.03567 | Crit\`eres de qualit\'e d'un classifieur g\'en\'eraliste | This paper considers the problem of choosing a good classifier. For each problem there exists an optimal classifier, but none is optimal, with regard to the error rate, in all cases. Because there exists a large number of classifiers, a user would rather prefer an all-purpose classifier that is easy to adjust, in the hope that it will do almost as well as the optimal one. In this paper we establish a list of criteria that a good generalist classifier should satisfy. We first discuss data analytics, then these criteria are presented. Six among the most popular classifiers are selected and scored according to these criteria. Tables allow the reader to easily appreciate the relative value of each. In the end, random forests turn out to be the best classifiers. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 90,005 |
2110.11395 | SOSP: Efficiently Capturing Global Correlations by Second-Order Structured Pruning | Pruning neural networks reduces inference time and memory costs. On standard hardware, these benefits will be especially prominent if coarse-grained structures, like feature maps, are pruned. We devise two novel saliency-based methods for second-order structured pruning (SOSP) which include correlations among all structures and layers. Our main method SOSP-H employs an innovative second-order approximation, which enables saliency evaluations by fast Hessian-vector products. SOSP-H thereby scales like a first-order method despite taking into account the full Hessian. We validate SOSP-H by comparing it to our second method SOSP-I that uses a well-established Hessian approximation, and to numerous state-of-the-art methods. While SOSP-H performs on par or better in terms of accuracy, it has clear advantages in terms of scalability and efficiency. This allowed us to scale SOSP-H to large-scale vision tasks, even though it captures correlations across all layers of the network. To underscore the global nature of our pruning methods, we evaluate their performance not only by removing structures from a pretrained network, but also by detecting architectural bottlenecks. We show that our algorithms allow us to systematically reveal architectural bottlenecks, which we then remove to further increase the accuracy of the networks. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 262,463 |
1105.5849 | Diffusion in Networks With Overlapping Community Structure | In this work we study diffusion in networks with community structure. We first replicate and extend work on networks with non-overlapping community structure. We then study diffusion on network models that have overlapping community structure. We study contagions in the standard SIR model, and complex contagions thought to be better models of some social diffusion processes. Finally, we investigate diffusion on empirical networks with known overlapping community structure, by analysing the structure of such networks, and by simulating contagion on them. We find that simple and complex contagions can spread fast in networks with overlapping community structure. We also find that short paths exist through overlapping community structure on empirical networks. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 10,567 |
1811.07170 | Optical Flow Dataset and Benchmark for Visual Crowd Analysis | The performance of optical flow algorithms greatly depends on the specifics of the content and the application for which it is used. Existing and well established optical flow datasets are limited to rather particular contents from which none is close to crowd behavior analysis; whereas such applications heavily utilize optical flow. We introduce a new optical flow dataset exploiting the possibilities of a recent video engine to generate sequences with ground-truth optical flow for large crowds in different scenarios. We break with the development of the last decade of introducing ever increasing displacements to pose new difficulties. Instead we focus on real-world surveillance scenarios where numerous small, partly independent, non-rigidly moving objects observed over a long temporal range pose a challenge. By evaluating different optical flow algorithms, we find that results of established datasets cannot be transferred to these new challenges. In exhaustive experiments we are able to provide new insight into optical flow for crowd analysis. Finally, the results have been validated on the real-world UCF crowd tracking benchmark while achieving competitive results compared to more sophisticated state-of-the-art crowd tracking approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 113,693 |
2502.02170 | Graph Neural Networks for O-RAN Mobility Management: A Link Prediction Approach | Mobility performance has been a key focus in cellular networks up to 5G. To enhance handover (HO) performance, 3GPP introduced Conditional Handover (CHO) and Layer 1/Layer 2 Triggered Mobility (LTM) mechanisms in 5G. While these reactive HO strategies address the trade-off between HO failures (HOF) and ping-pong effects, they often result in inefficient radio resource utilization due to additional HO preparations. To overcome these challenges, this article proposes a proactive HO framework for mobility management in O-RAN, leveraging user-cell link predictions to identify the optimal target cell for HO. We explore various categories of Graph Neural Networks (GNNs) for link prediction and analyze the complexity of applying them to the mobility management domain. Two GNN models are compared using a real-world dataset, with experimental results demonstrating their ability to capture the dynamic and graph-structured nature of cellular networks. Finally, we present key insights from our study and outline future steps to enable the integration of GNN-based link prediction for mobility management in 6G networks. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 530,201 |
2006.00917 | Evaluation of the general applicability of Dragoon for the k-center problem | The k-center problem is a fundamental problem we often face when considering complex service systems. Typical challenges include the placement of warehouses in logistics or positioning of servers for content delivery networks. We previously have proposed Dragoon as an effective algorithm to approach the k-center problem. This paper evaluates Dragoon with a focus on potential worst case behavior in comparison to other techniques. We use an evolutionary algorithm to generate instances of the k-center problem that are especially challenging for Dragoon. Ultimately, our experiments confirm the previous good results of Dragoon, however, we also can reliably find scenarios where it is clearly outperformed by other approaches. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | true | false | true | 179,614 |
1709.09220 | Dataset Construction via Attention for Aspect Term Extraction with Distant Supervision | Aspect Term Extraction (ATE) detects opinionated aspect terms in sentences or text spans, with the end goal of performing aspect-based sentiment analysis. The small amount of available datasets for supervised ATE and the fact that they cover only a few domains raise the need for exploiting other data sources in new and creative ways. Publicly available review corpora contain a plethora of opinionated aspect terms and cover a larger domain spectrum. In this paper, we first propose a method for using such review corpora for creating a new dataset for ATE. Our method relies on an attention mechanism to select sentences that have a high likelihood of containing actual opinionated aspects. We thus improve the quality of the extracted aspects. We then use the constructed dataset to train a model and perform ATE with distant supervision. By evaluating on human annotated datasets, we prove that our method achieves a significantly improved performance over various unsupervised and supervised baselines. Finally, we prove that sentence selection matters when it comes to creating new datasets for ATE. Specifically, we show that, using a set of selected sentences leads to higher ATE performance compared to using the whole sentence set. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 81,588 |
2407.06525 | UnmixingSR: Material-aware Network with Unsupervised Unmixing as Auxiliary Task for Hyperspectral Image Super-resolution | Deep learning-based (DL-based) hyperspectral image (HSI) super-resolution (SR) methods have achieved remarkable performance and attracted attention in industry and academia. Nonetheless, most current methods explored and learned the mapping relationship between low-resolution (LR) and high-resolution (HR) HSIs, leading to the side effect of increasing unreliability and irrationality in solving the ill-posed SR problem. We find, quite interestingly, that LR imaging is similar to the mixed pixel phenomenon. A single photodetector in sensor arrays receives the reflectance signals reflected by a number of classes, resulting in low spatial resolution and mixed pixel problems. Inspired by this observation, this paper proposes a component-aware HSI SR network called UnmixingSR, in which unsupervised hyperspectral unmixing (HU) as an auxiliary task is used to perceive the material components of HSIs. We regard HU as an auxiliary task and incorporate it into the HSI SR process by exploring the constraints between LR and HR abundances. Instead of only learning the mapping relationship between LR and HR HSIs, we leverage the bond between LR abundances and HR abundances to boost the stability of our method in solving SR problems. Moreover, the proposed unmixing process can be embedded into existing deep SR models as a plug-and-play auxiliary task. Experimental results show that the unmixing process as an auxiliary task incorporated into the SR problem is feasible and rational, achieving outstanding performance. The code is available at | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 471,426 |
2502.05240 | Survey on AI-Generated Media Detection: From Non-MLLM to MLLM | The proliferation of AI-generated media poses significant challenges to information authenticity and social trust, making reliable detection methods highly demanded. Methods for detecting AI-generated media have evolved rapidly, paralleling the advancement of Multimodal Large Language Models (MLLMs). Current detection approaches can be categorized into two main groups: Non-MLLM-based and MLLM-based methods. The former employs high-precision, domain-specific detectors powered by deep learning techniques, while the latter utilizes general-purpose detectors based on MLLMs that integrate authenticity verification, explainability, and localization capabilities. Despite significant progress in this field, there remains a gap in literature regarding a comprehensive survey that examines the transition from domain-specific to general-purpose detection methods. This paper addresses this gap by providing a systematic review of both approaches, analyzing them from single-modal and multi-modal perspectives. We present a detailed comparative analysis of these categories, examining their methodological similarities and differences. Through this analysis, we explore potential hybrid approaches and identify key challenges in forgery detection, providing direction for future research. Additionally, as MLLMs become increasingly prevalent in detection tasks, ethical and security considerations have emerged as critical global concerns. We examine the regulatory landscape surrounding Generative AI (GenAI) across various jurisdictions, offering valuable insights for researchers and practitioners in this field. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 531,520 |
1604.04421 | Stabilizing Transmission Intervals for Nonlinear Delayed Networked Control Systems [Extended Version] | In this article, we consider a nonlinear process with delayed dynamics to be controlled over a communication network in the presence of disturbances and study robustness of the resulting closed-loop system with respect to network-induced phenomena such as sampled, distorted, delayed and lossy data as well as scheduling protocols. For given plant-controller dynamics and communication network properties (e.g., propagation delays and scheduling protocols), we quantify the control performance level (in terms of Lp-gains) as the transmission interval varies. Maximally Allowable Transfer Interval (MATI) labels the greatest transmission interval for which a prescribed Lp-gain is attained. The proposed methodology combines impulsive delayed system modeling with Lyapunov-Razumikhin techniques to allow for MATIs that are smaller than the communication delays. Other salient features of our methodology are the consideration of variable delays, corrupted data and employment of model-based estimators to prolong MATIs. The present stability results are provided for the class of Uniformly Globally Exponentially Stable (UGES) scheduling protocols. The well-known Round Robin (RR) and Try-Once-Discard (TOD) protocols are examples of UGES protocols. Finally, two numerical examples are provided to demonstrate the benefits of the proposed approach. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 54,645 |
2405.17813 | The Impacts of Data, Ordering, and Intrinsic Dimensionality on Recall in Hierarchical Navigable Small Worlds | Vector search systems, pivotal in AI applications, often rely on the Hierarchical Navigable Small Worlds (HNSW) algorithm. However, the behaviour of HNSW under real-world scenarios using vectors generated with deep learning models remains under-explored. Existing Approximate Nearest Neighbours (ANN) benchmarks and research typically have an over-reliance on simplistic datasets like MNIST or SIFT1M and fail to reflect the complexity of current use-cases. Our investigation focuses on HNSW's efficacy across a spectrum of datasets, including synthetic vectors tailored to mimic specific intrinsic dimensionalities, widely-used retrieval benchmarks with popular embedding models, and proprietary e-commerce image data with CLIP models. We survey the most popular HNSW vector databases and collate their default parameters to provide a realistic fixed parameterisation for the duration of the paper. We discover that the recall of approximate HNSW search, in comparison to exact K Nearest Neighbours (KNN) search, is linked to the vector space's intrinsic dimensionality and significantly influenced by the data insertion sequence. Our methodology highlights how insertion order, informed by measurable properties such as the pointwise Local Intrinsic Dimensionality (LID) or known categories, can shift recall by up to 12 percentage points. We also observe that running popular benchmark datasets with HNSW instead of KNN can shift rankings by up to three positions for some models. This work underscores the need for more nuanced benchmarks and design considerations in developing robust vector search systems using approximate vector search algorithms. This study presents a number of scenarios with varying real-world applicability which aim to improve understanding and inform future development of ANN algorithms and embeddings. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 458,128 |
2409.10995 | SynthSOD: Developing an Heterogeneous Dataset for Orchestra Music Source Separation | Music source separation has advanced significantly in recent years, particularly in isolating vocals, drums, and bass elements from mixed tracks. These developments owe much to the creation and use of large-scale, multitrack datasets dedicated to these specific components. However, the challenge of extracting similarly sounding sources from orchestra recordings has not been extensively explored, largely due to a scarcity of comprehensive and clean (i.e., bleed-free) multitrack datasets. In this paper, we introduce a novel multitrack dataset called SynthSOD, developed using a set of simulation techniques to create a realistic (i.e., using high-quality soundfonts), musically motivated, and heterogeneous training set comprising different dynamics, natural tempo changes, styles, and conditions. Moreover, we demonstrate the application of a widely used baseline music separation model trained on our synthesized dataset on the well-known EnsembleSet, and evaluate its performance under both synthetic and real-world conditions. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 488,968 |
2404.04281 | Similar Data Points Identification with LLM: A Human-in-the-loop Strategy Using Summarization and Hidden State Insights | This study introduces a simple yet effective method for identifying similar data points across non-free text domains, such as tabular and image data, using Large Language Models (LLMs). Our two-step approach involves data point summarization and hidden state extraction. Initially, data is condensed via summarization using an LLM, reducing complexity and highlighting essential information in sentences. Subsequently, the summarization sentences are fed through another LLM to extract hidden states, serving as compact, feature-rich representations. This approach leverages the advanced comprehension and generative capabilities of LLMs, offering a scalable and efficient strategy for similarity identification across diverse datasets. We demonstrate the effectiveness of our method in identifying similar data points on multiple datasets. Additionally, our approach enables non-technical domain experts, such as fraud investigators or marketing operators, to quickly identify similar data points tailored to specific scenarios, demonstrating its utility in practical applications. In general, our results open new avenues for leveraging LLMs in data analysis across various domains. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 444,580 |
1811.12465 | Uncertainty propagation in neural networks for sparse coding | A novel method to propagate uncertainty through the soft-thresholding nonlinearity is proposed in this paper. At every layer, the current distribution of the target vector is represented as a spike-and-slab distribution, which captures the probability of each variable being zero or Gaussian-distributed. Using the proposed method of uncertainty propagation, the gradients of the logarithms of normalisation constants are derived, which can be used to update a weight distribution. A novel Bayesian neural network for sparse coding is designed utilising both the proposed method of uncertainty propagation and a Bayesian inference algorithm. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 115,018
2410.17514 | SRA: A Novel Method to Improve Feature Embedding in Self-supervised
Learning for Histopathological Images | Self-supervised learning has become a cornerstone in various areas, particularly histopathological image analysis. Image augmentation plays a crucial role in self-supervised learning, as it generates variations in image samples. However, traditional image augmentation techniques often overlook the unique characteristics of histopathological images. In this paper, we propose a new histopathology-specific image augmentation method called stain reconstruction augmentation (SRA). We integrate our SRA with MoCo v3, a leading model in self-supervised contrastive learning, along with our additional contrastive loss terms, and call the new model SRA-MoCo v3. We demonstrate that our SRA-MoCo v3 always outperforms the standard MoCo v3 across various downstream tasks and achieves comparable or superior performance to other foundation models pre-trained on significantly larger histopathology datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 501,498 |
2212.11681 | Variational Quantum Soft Actor-Critic for Robotic Arm Control | Deep Reinforcement Learning is emerging as a promising approach for the continuous control task of robotic arm movement. However, the challenges of learning robust and versatile control capabilities are still far from being resolved for real-world applications, mainly because of two common issues of this learning paradigm: the exploration strategy and the slow learning speed, sometimes known as "the curse of dimensionality". This work aims at exploring and assessing the advantages of the application of Quantum Computing to one of the state-of-the-art Reinforcement Learning techniques for continuous control - namely Soft Actor-Critic. Specifically, the performance of a Variational Quantum Soft Actor-Critic on the movement of a virtual robotic arm has been investigated by means of digital simulations of quantum circuits. A quantum advantage over the classical algorithm has been found in terms of a significant decrease in the amount of required parameters for satisfactory model training, paving the way for further promising developments. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 337,850
1704.01792 | Neural Question Generation from Text: A Preliminary Study | Automatic question generation aims to generate questions from a text passage where the generated questions can be answered by certain sub-spans of the given passage. Traditional methods mainly use rigid heuristic rules to transform a sentence into related questions. In this work, we propose to apply the neural encoder-decoder model to generate meaningful and diverse questions from natural language sentences. The encoder reads the input text and the answer position, to produce an answer-aware input representation, which is fed to the decoder to generate an answer focused question. We conduct a preliminary study on neural question generation from text with the SQuAD dataset, and the experiment results show that our method can produce fluent and diverse questions. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 71,326 |
2205.01871 | UCL-Dehaze: Towards Real-world Image Dehazing via Unsupervised
Contrastive Learning | While the wisdom of training an image dehazing model on synthetic hazy data can alleviate the difficulty of collecting real-world hazy/clean image pairs, it brings the well-known domain shift problem. From a different yet new perspective, this paper explores contrastive learning with an adversarial training effort to leverage unpaired real-world hazy and clean images, thus avoiding the gap between synthetic and real-world haze. We propose an effective unsupervised contrastive learning paradigm for image dehazing, dubbed UCL-Dehaze. Unpaired real-world clean and hazy images are easily captured, and will serve as the important positive and negative samples, respectively, when training our UCL-Dehaze network. To train the network more effectively, we formulate a new self-contrastive perceptual loss function, which encourages the restored images to approach the positive samples and keep away from the negative samples in the embedding space. Besides the overall network architecture of UCL-Dehaze, adversarial training is utilized to align the distributions between the positive samples and the dehazed images. Compared with recent image dehazing works, UCL-Dehaze does not require paired data during training and utilizes unpaired positive/negative data to better enhance the dehazing performance. We conduct comprehensive experiments to evaluate our UCL-Dehaze and demonstrate its superiority over the state of the art, even when only 1,800 unpaired real-world images are used to train our network. Source code is available at https://github.com/yz-wang/UCL-Dehaze. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 294,743
2108.01643 | Progressive Transmission using Recurrent Neural Networks | In this paper, we investigate a new machine learning-based transmission strategy called progressive transmission or ProgTr. In ProgTr, there are b variables that should be transmitted using at most T channel uses. The transmitter aims to send the data to the receiver as fast as possible and with as few channel uses as possible (as channel conditions permit) while the receiver refines its estimate after each channel use. We use recurrent neural networks as the building block of both the transmitter and receiver, where the SNR is provided as an input that represents the channel conditions. To show how ProgTr works, the proposed scheme was simulated in different scenarios including single/multi-user settings, different channel conditions, and for both discrete and continuous input data. The results show that ProgTr can achieve better performance compared to conventional modulation methods. In addition to performance metrics such as BER, bit-wise mutual information is used to provide some interpretation of how the transmitter and receiver operate in ProgTr. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 249,091
1706.02633 | Real-valued (Medical) Time Series Generation with Recurrent Conditional
GANs | Generative Adversarial Networks (GANs) have shown remarkable success as a framework for training models to produce realistic-looking data. In this work, we propose a Recurrent GAN (RGAN) and Recurrent Conditional GAN (RCGAN) to produce realistic real-valued multi-dimensional time series, with an emphasis on their application to medical data. RGANs make use of recurrent neural networks in the generator and the discriminator. In the case of RCGANs, both of these RNNs are conditioned on auxiliary information. We demonstrate our models in a set of toy datasets, where we show visually and quantitatively (using sample likelihood and maximum mean discrepancy) that they can successfully generate realistic time-series. We also describe novel evaluation methods for GANs, where we generate a synthetic labelled training dataset, and evaluate on a real test set the performance of a model trained on the synthetic data, and vice-versa. We illustrate with these metrics that RCGANs can generate time-series data useful for supervised training, with only minor degradation in performance on real test data. This is demonstrated on digit classification from 'serialised' MNIST and by training an early warning system on a medical dataset of 17,000 patients from an intensive care unit. We further discuss and analyse the privacy concerns that may arise when using RCGANs to generate realistic synthetic medical time series data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 75,010 |
1606.00950 | Graph Clustering with Density-Cut | How can we find a good graph clustering of a real-world network that allows insight into its underlying structure and also potential functions? In this paper, we introduce a new graph clustering algorithm Dcut from a density point of view. The basic idea is to envision the graph clustering as a density-cut problem, such that the vertices in the same cluster are densely connected and the vertices between clusters are sparsely connected. To identify meaningful clusters (communities) in a graph, a density-connected tree is first constructed in a local fashion. Owing to the density-connected tree, Dcut allows partitioning a graph into multiple densely tight-knit clusters directly. We demonstrate that our method has several attractive benefits: (a) Dcut provides an intuitive criterion to evaluate the goodness of a graph clustering in a more natural and precise way; (b) Built upon the density-connected tree, Dcut allows identifying the meaningful graph clusters of densely connected vertices efficiently; (c) The density-connected tree provides a connectivity map of vertices in a graph from a local density perspective. We systematically evaluate our new clustering approach on synthetic as well as real data to demonstrate its good performance. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 56,731
2408.04682 | ToolSandbox: A Stateful, Conversational, Interactive Evaluation
Benchmark for LLM Tool Use Capabilities | Recent advancements in large language models (LLMs) have sparked growing research interest in tool-assisted LLMs solving real-world challenges, which calls for comprehensive evaluation of tool-use capabilities. While previous works focused on either evaluating over stateless web services (RESTful API), based on a single turn user prompt, or an off-policy dialog trajectory, ToolSandbox includes stateful tool execution, implicit state dependencies between tools, a built-in user simulator supporting on-policy conversational evaluation and a dynamic evaluation strategy for intermediate and final milestones over an arbitrary trajectory. We show that open source and proprietary models have a significant performance gap, and complex tasks like State Dependency, Canonicalization and Insufficient Information defined in ToolSandbox are challenging even the most capable SOTA LLMs, providing brand-new insights into tool-use LLM capabilities. The ToolSandbox evaluation framework is released at https://github.com/apple/ToolSandbox | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 479,490
2310.07572 | Impact of Label Types on Training SWIN Models with Overhead Imagery | Understanding the impact of data set design on model training and performance can help alleviate the costs associated with generating remote sensing and overhead labeled data. This work examined the impact of training shifted window transformers using bounding boxes and segmentation labels, where the latter are more expensive to produce. We examined classification tasks by comparing models trained with both target and backgrounds against models trained with only target pixels, extracted by segmentation labels. For object detection models, we compared performance using either label type when training. We found that the models trained on only target pixels do not show performance improvement for classification tasks, appearing to conflate background pixels in the evaluation set with target pixels. For object detection, we found that models trained with either label type showed equivalent performance across testing. We found that bounding boxes appeared to be sufficient for tasks that did not require more complex labels, such as object segmentation. Continuing work to determine consistency of this result across data types and model architectures could potentially result in substantial savings in generating remote sensing data sets for deep learning. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 399,022 |
2211.17042 | Spatio-Temporal Crop Aggregation for Video Representation Learning | We propose Spatio-temporal Crop Aggregation for video representation LEarning (SCALE), a novel method that enjoys high scalability at both training and inference time. Our model builds long-range video features by learning from sets of video clip-level features extracted with a pre-trained backbone. To train the model, we propose a self-supervised objective consisting of masked clip feature prediction. We apply sparsity to both the input, by extracting a random set of video clips, and to the loss function, by only reconstructing the sparse inputs. Moreover, we use dimensionality reduction by working in the latent space of a pre-trained backbone applied to single video clips. These techniques make our method not only extremely efficient to train but also highly effective in transfer learning. We demonstrate that our video representation yields state-of-the-art performance with linear, non-linear, and KNN probing on common action classification and video understanding datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 333,838 |
2404.06665 | Deep Generative Data Assimilation in Multimodal Setting | Robust integration of physical knowledge and data is key to improve computational simulations, such as Earth system models. Data assimilation is crucial for achieving this goal because it provides a systematic framework to calibrate model outputs with observations, which can include remote sensing imagery and ground station measurements, with uncertainty quantification. Conventional methods, including Kalman filters and variational approaches, inherently rely on simplifying linear and Gaussian assumptions, and can be computationally expensive. Nevertheless, with the rapid adoption of data-driven methods in many areas of computational sciences, we see the potential of emulating traditional data assimilation with deep learning, especially generative models. In particular, the diffusion-based probabilistic framework has large overlaps with data assimilation principles: both allow for conditional generation of samples with a Bayesian inverse framework. These models have shown remarkable success in text-conditioned image generation or image-controlled video synthesis. Likewise, one can frame data assimilation as observation-conditioned state calibration. In this work, we propose SLAMS: Score-based Latent Assimilation in Multimodal Setting. Specifically, we assimilate in-situ weather station data and ex-situ satellite imagery to calibrate the vertical temperature profiles, globally. Through extensive ablation, we demonstrate that SLAMS is robust even in low-resolution, noisy, and sparse data settings. To our knowledge, our work is the first to apply a deep generative framework for multimodal data assimilation using real-world datasets; an important step for building robust computational simulators, including the next-generation Earth system models.
Our code is available at: https://github.com/yongquan-qu/SLAMS | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 445,544 |
2401.05083 | Discrete-Time Stress Matrix-Based Formation Control of General Linear
Multi-Agent Systems | This paper considers the distributed leader-follower stress-matrix-based affine formation control problem of discrete-time linear multi-agent systems with static and dynamic leaders. In leader-follower multi-agent formation control, the aim is to drive a set of agents comprising leaders and followers to form any desired geometric pattern and simultaneously execute any required manoeuvre by controlling only a few agents denoted as leaders. Existing works in the literature are mostly limited to the cases where the agents' inter-agent communications are either in the continuous-time settings or the sampled-data cases where the leaders are constrained to constant (or zero) velocities or accelerations. Here, we relax these constraints and study the discrete-time cases where the leaders can have stationary or time-varying velocities. We propose control laws for different situations and provide some sufficient conditions to guarantee the overall system stability. A simulation study is used to demonstrate the efficacy of our proposed control laws. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | true | false | false | false | 420,641
2004.06201 | Reverse Engineering Configurations of Neural Text Generation Models | This paper seeks to develop a deeper understanding of the fundamental properties of neural text generation models. The study of artifacts that emerge in machine generated text as a result of modeling choices is a nascent research area. Previously, the extent and degree to which these artifacts surface in generated text has not been well studied. In the spirit of better understanding generative text models and their artifacts, we propose the new task of distinguishing which of several variants of a given model generated a piece of text, and we conduct an extensive suite of diagnostic tests to observe whether modeling choices (e.g., sampling methods, top-$k$ probabilities, model architectures, etc.) leave detectable artifacts in the text they generate. Our key finding, which is backed by a rigorous set of experiments, is that such artifacts are present and that different modeling choices can be inferred by observing the generated text alone. This suggests that neural text generators may be more sensitive to various modeling choices than previously thought. | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | 172,438
2407.00911 | Deep Image-to-Recipe Translation | The modern saying, "You Are What You Eat" resonates on a profound level, reflecting the intricate connection between our identities and the food we consume. Our project, Deep Image-to-Recipe Translation, is an intersection of computer vision and natural language generation that aims to bridge the gap between cherished food memories and the art of culinary creation. Our primary objective involves predicting ingredients from a given food image. For this task, we first develop a custom convolutional network and then compare its performance to a model that leverages transfer learning. We pursue an additional goal of generating a comprehensive set of recipe steps from a list of ingredients. We frame this process as a sequence-to-sequence task and develop a recurrent neural network that utilizes pre-trained word embeddings. We address several challenges of deep learning including imbalanced datasets, data cleaning, overfitting, and hyperparameter selection. Our approach emphasizes the importance of metrics such as Intersection over Union (IoU) and F1 score in scenarios where accuracy alone might be misleading. For our recipe prediction model, we employ perplexity, a commonly used and important metric for language models. We find that transfer learning via pre-trained ResNet-50 weights and GloVe embeddings provide an exceptional boost to model performance, especially when considering training resource constraints. Although we have made progress on the image-to-recipe translation, there is an opportunity for future exploration with advancements in model architectures, dataset scalability, and enhanced user interaction. | false | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | 469,054 |
2309.14736 | The Tight Upper Bound for the Size of Single Deletion Error Correcting
Codes in Dimension 11 | A single deletion error correcting code (SDECC) is a set of fixed-length sequences consisting of two types of symbols, 0 and 1, such that the original sequence can be recovered after at most one deletion error. The upper bound for the size of an SDECC is expected to be equal to the size of the Varshamov-Tenengolts (VT) code, and this conjecture had been shown to be true when the code length is ten or less. In this paper, we discuss a method for calculating this upper bound by providing an integer linear programming solver with several linear constraints. As a new result, we obtained that the tight upper bound for the size of a single deletion error correcting code in dimension 11 is 172. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 394,718
2309.11622 | Offline and Online Use of Interval and Set-Based Approaches for Control
and State Estimation: A Selection of Methodological Approaches and Their
Application | Control and state estimation procedures need to be robust against imprecisely known parameters, uncertainty in initial conditions, and external disturbances. Interval methods and other set-based techniques form the basis for the implementation of powerful approaches that can be used to identify parameters of dynamic system models in the presence of the aforementioned types of uncertainty. Moreover, they are applicable to a verified feasibility and stability analysis of controllers and state estimators. In addition to these approaches which are typically used offline for analysis of system models designed with classical floating point procedures, interval and set-based methods have also been developed in recent years, which allow to directly solve the associated design tasks and to implement reliable techniques that are applicable online, i.e., during system operation. The latter approaches include set-based model predictive control, online parameter adaptation techniques for nonlinear variable-structure and backstepping controllers, interval observers, and fault diagnosis techniques. This paper provides an overview of the methodological background and reviews numerous practical applications for which interval and other set-valued approaches have been employed successfully. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 393,471 |
1304.3441 | Machine Generalization and Human Categorization: An
Information-Theoretic View | In designing an intelligent system that must be able to explain its reasoning to a human user, or to provide generalizations that the human user finds reasonable, it may be useful to take into consideration psychological data on what types of concepts and categories people naturally use. The psychological literature on concept learning and categorization provides strong evidence that certain categories are more easily learned, recalled, and recognized than others. We show here how a measure of the informational value of a category predicts the results of several important categorization experiments better than standard alternative explanations. This suggests that information-based approaches to machine generalization may prove particularly useful and natural for human users of the systems. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 23,880 |
2407.14237 | Hyper-Heuristics Can Profit From Global Variation Operators | In recent work, Lissovoi, Oliveto, and Warwicker (Artificial Intelligence (2023)) proved that the Move Acceptance Hyper-Heuristic (MAHH) leaves the local optimum of the multimodal CLIFF benchmark with remarkable efficiency. The $O(n^3)$ runtime of the MAHH, for almost all cliff widths $d\ge 2,$ is significantly better than the $\Theta(n^d)$ runtime of simple elitist evolutionary algorithms (EAs) on CLIFF. In this work, we first show that this advantage is specific to the CLIFF problem and does not extend to the JUMP benchmark, the most prominent multi-modal benchmark in the theory of randomized search heuristics. We prove that for any choice of the MAHH selection parameter $p$, the expected runtime of the MAHH on a JUMP function with gap size $m = O(n^{1/2})$ is at least $\Omega(n^{2m-1} / (2m-1)!)$. This is significantly slower than the $O(n^m)$ runtime of simple elitist EAs. Encouragingly, we also show that replacing the local one-bit mutation operator in the MAHH with the global bit-wise mutation operator, commonly used in EAs, yields a runtime of $\min\{1, O(\frac{e\ln(n)}{m})^m\} \, O(n^m)$ on JUMP functions. This is at least as good as the runtime of simple elitist EAs. For larger values of $m$, this result proves an asymptotic performance gain over simple EAs. As our proofs reveal, the MAHH profits from its ability to walk through the valley of lower objective values in moderate-size steps, always accepting inferior solutions. This is the first time that such an optimization behavior is proven via mathematical means. Generally, our result shows that combining two ways of coping with local optima, global mutation and accepting inferior solutions, can lead to considerable performance gains. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | true | 474,706 |
1711.03240 | Cellular Offloading via Downlink Cache Placement | In this paper, the downlink file transmission within a finite lifetime is optimized with the assistance of wireless cache nodes. Specifically, the number of requests within the lifetime of one file is modeled as a Poisson point process. The base station multicasts files to downlink users and the selected cache nodes, so that the cache nodes can help to forward the files in the next file request. Thus we formulate the downlink transmission as a Markov decision process with a random number of stages, where the transmission power and time of each transmission are the control policy. Due to the random number of file transmissions, we first propose a revised Bellman's equation, where the optimal control policy can be derived. In order to address the prohibitively huge state space, we also introduce a low-complexity sub-optimal solution based on a linear approximation of the value function. The approximated value function can be calculated analytically, so that conventional numerical value iteration can be eliminated. Moreover, the gap between the approximated value function and the real value function is bounded analytically. It is shown by simulation that, with the approximated MDP approach, the proposed algorithm can significantly reduce the resource consumption at the base station. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 84,179
1702.08524 | Local Synchronization of Sampled-Data Systems on Lie Groups | We present a smooth distributed nonlinear control law for local synchronization of identical driftless kinematic agents on a Cartesian product of matrix Lie groups with a connected communication graph. If the agents are initialized sufficiently close to one another, then synchronization is achieved exponentially fast. We first analyze the special case of commutative Lie groups and show that in exponential coordinates, the closed-loop dynamics are linear. We characterize all equilibria of the network and, in the case of an unweighted, complete graph, characterize the settling time and conditions for deadbeat performance. Using the Baker-Campbell-Hausdorff theorem, we show that, in a neighbourhood of the identity element, all results generalize to arbitrary matrix Lie groups. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 69,000 |
1806.06973 | On the Bias of Reed-Muller Codes over Odd Prime Fields | We study the bias of random bounded-degree polynomials over odd prime fields and show that, with probability exponentially close to 1, such polynomials have exponentially small bias. This also yields an exponential tail bound on the weight distribution of Reed-Muller codes over odd prime fields. These results generalize bounds of Ben-Eliezer, Hod, and Lovett who proved similar results over $\mathbb{F}_2$. A key to our bounds is the proof of a new precise extremal property for the rank of sub-matrices of the generator matrices of Reed-Muller codes over odd prime fields. This extremal property is a substantial extension of an extremal property shown by Keevash and Sudakov for the case of $\mathbb{F}_2$. Our exponential tail bounds on the bias can be used to derive exponential lower bounds on the time for space-bounded learning of bounded-degree polynomials from their evaluations over odd prime fields. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 100,804 |
2412.16326 | When Worse is Better: Navigating the compression-generation tradeoff in
visual tokenization | Current image generation methods, such as latent diffusion and discrete token-based generation, depend on a two-stage training approach. In stage 1, an auto-encoder is trained to compress an image into a latent space; in stage 2, a generative model is trained to learn a distribution over that latent space. Most work focuses on maximizing stage 1 performance independent of stage 2, assuming better reconstruction always leads to better generation. However, we show this is not strictly true. Smaller stage 2 models can benefit from more compressed stage 1 latents even if reconstruction performance worsens, showing a fundamental trade-off between compression and generation modeling capacity. To better optimize this trade-off, we introduce Causally Regularized Tokenization (CRT), which uses knowledge of the stage 2 generation modeling procedure to embed useful inductive biases in stage 1 latents. This regularization makes stage 1 reconstruction performance worse, but makes stage 2 generation performance better by making the tokens easier to model: we are able to improve compute efficiency 2-3$\times$ over baseline and match state-of-the-art discrete autoregressive ImageNet generation (2.18 FID) with less than half the tokens per image (256 vs. 576) and a fourth the total model parameters (775M vs. 3.1B) as the previous SOTA (LlamaGen). | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 519,466 |
2001.10944 | Exact Blind Community Detection from Signals on Multiple Graphs | Networks and data supported on graphs have become ubiquitous in the sciences and engineering. This paper studies the 'blind' community detection problem, where we seek to infer the community structure of a graph model given the observation of independent graph signals on a set of nodes whose connections are unknown. We model each observation as filtered white noise, where the underlying network structure varies with every observation. These varying network structures are modeled as independent realizations of a latent planted partition model (PPM), justifying our assumption of a constant underlying community structure over all observations. Under certain conditions on the graph filter and PPM parameters, we propose algorithms for determining (i) the number of latent communities and (ii) the associated partitions of the PPM. We then prove statistical guarantees in the asymptotic and non-asymptotic sampling cases. Numerical experiments on real and synthetic data demonstrate the efficacy of our algorithms. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 161,940 |
2012.00437 | A Unified Structure for Efficient RGB and RGB-D Salient Object Detection | Salient object detection (SOD) has been well studied in recent years, especially using deep neural networks. However, SOD with RGB and RGB-D images is usually treated as two different tasks with different network structures that need to be designed specifically. In this paper, we propose a unified and efficient structure with a cross-attention context extraction (CRACE) module to address both tasks of SOD efficiently. The proposed CRACE module receives and appropriately fuses two (for RGB SOD) or three (for RGB-D SOD) inputs. The simple unified feature pyramid network (FPN)-like structure with CRACE modules conveys and refines the results under the multi-level supervisions of saliency and boundaries. The proposed structure is simple yet effective; the rich context information of RGB and depth can be appropriately extracted and fused by the proposed structure efficiently. Experimental results show that our method outperforms other state-of-the-art methods in both RGB and RGB-D SOD tasks on various datasets and in terms of most metrics. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 209,137
1809.02266 | BubGAN: Bubble Generative Adversarial Networks for Synthesizing
Realistic Bubbly Flow Images | Bubble segmentation and size detection algorithms have been developed in recent years for their high efficiency and accuracy in measuring bubbly two-phase flows. In this work, we propose an architecture called bubble generative adversarial networks (BubGAN) for the generation of realistic synthetic images which could be further used as training or benchmarking data for the development of advanced image processing algorithms. The BubGAN is trained initially on a labeled bubble dataset consisting of ten thousand images. By learning the distribution of these bubbles, the BubGAN can generate more realistic bubbles compared to the conventional models used in the literature. The trained BubGAN is conditioned on bubble feature parameters and has full control of bubble properties in terms of aspect ratio, rotation angle, circularity and edge ratio. A million-bubble dataset is pre-generated using the trained BubGAN. One can then assemble realistic bubbly flow images using this dataset and the associated image processing tool. These images contain detailed bubble information and therefore do not require additional manual labeling. This is more useful compared with the conventional GAN, which generates images without labeling information. The tool could be used to provide benchmarking and training data for existing image processing algorithms and to guide the future development of bubble detection algorithms. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 107,009
1007.1938 | Affine equivalence of cubic homogeneous rotation symmetric Boolean
functions | Homogeneous rotation symmetric Boolean functions have been extensively studied in recent years because of their applications in cryptography. Little is known about the basic question of when two such functions are affine equivalent. The simplest case of quadratic rotation symmetric functions which are generated by cyclic permutations of the variables in a single monomial was only settled in 2009. This paper studies the much more complicated cubic case for such functions. A new concept of \emph{patterns} is introduced, by means of which the structure of the smallest group G_n, whose action on the set of all such cubic functions in $n$ variables gives the affine equivalence classes for these functions under permutation of the variables, is determined. We conjecture that the equivalence classes are the same if all nonsingular affine transformations, not just permutations, are allowed. This conjecture is verified if n < 22. Our method gives much more information about the equivalence classes; for example, in this paper we give a complete description of the equivalence classes when n is a prime or a power of 3. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 7,043 |
2205.06995 | Comparative evaluation of community-aware centrality measures | Influential nodes play a critical role in boosting or curbing spreading phenomena in complex networks. Numerous centrality measures have been proposed for identifying and ranking the nodes according to their importance. Classical centrality measures rely on various local or global properties of the nodes. They do not take into account the network community structure. Recently, a growing body of research has shifted to community-aware centrality measures, since community structure is a ubiquitous feature in the vast majority of real-world networks. In the literature, the focus is on designing community-aware centrality measures. However, up to now, there has been no systematic evaluation of their effectiveness. This study fills this gap and answers the question of which community-aware centrality measure should be used in practical situations. We investigate seven influential community-aware centrality measures in an epidemic spreading process scenario using the Susceptible-Infected-Recovered (SIR) model on a set of fifteen real-world networks. Results show that, generally, the correlation between community-aware centrality measures is low. Furthermore, in a multiple-spreader problem, when resources are available, targeting distant hubs using Modularity Vitality is more effective. However, with limited resources, diffusion expands better through bridges, especially in networks with a medium or strong community structure. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 296,437
2404.09703 | AI Competitions and Benchmarks: Dataset Development | Machine learning is now used in many applications thanks to its ability to predict, generate, or discover patterns from large quantities of data. However, the process of collecting and transforming data for practical use is intricate. Even in today's digital era, where substantial data is generated daily, it is uncommon for it to be readily usable; most often, it necessitates meticulous manual data preparation. The haste in developing new models can frequently result in various shortcomings, potentially posing risks when deployed in real-world scenarios (e.g., social discrimination, critical failures), leading to the failure or substantial escalation of costs in AI-based projects. This chapter provides a comprehensive overview of established methodological tools, enriched by our practical experience, in the development of datasets for machine learning. First, we describe the tasks involved in dataset development and offer insights into their effective management (including requirements, design, implementation, evaluation, distribution, and maintenance). Then, we provide more details about the implementation process, which includes data collection, transformation, and quality evaluation. Finally, we address practical considerations regarding dataset distribution and maintenance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 446,799
1303.6249 | A Derivation of the Source-Channel Error Exponent using Non-identical
Product Distributions | This paper studies the random-coding exponent of joint source-channel coding for a scheme where source messages are assigned to disjoint subsets (referred to as classes), and codewords are independently generated according to a distribution that depends on the class index of the source message. For discrete memoryless systems, two optimally chosen classes and product distributions are found to be sufficient to attain the sphere-packing exponent in those cases where it is tight. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 23,259 |
1402.3926 | Sparse Coding Approach for Multi-Frame Image Super Resolution | An image super-resolution method from multiple observation of low-resolution images is proposed. The method is based on sub-pixel accuracy block matching for estimating relative displacements of observed images, and sparse signal representation for estimating the corresponding high-resolution image. Relative displacements of small patches of observed low-resolution images are accurately estimated by a computationally efficient block matching method. Since the estimated displacements are also regarded as a warping component of image degradation process, the matching results are directly utilized to generate low-resolution dictionary for sparse image representation. The matching scores of the block matching are used to select a subset of low-resolution patches for reconstructing a high-resolution patch, that is, an adaptive selection of informative low-resolution images is realized. When there is only one low-resolution image, the proposed method works as a single-frame super-resolution method. The proposed method is shown to perform comparable or superior to conventional single- and multi-frame super-resolution methods through experiments using various real-world datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 30,916 |
1904.09048 | Automated Focal Loss for Image based Object Detection | Current state-of-the-art object detection algorithms still suffer from the problem of imbalanced distribution of training data over object classes and background. Recent work introduced a new loss function called focal loss to mitigate this problem, but at the cost of an additional hyperparameter. Manually tuning this hyperparameter for each training task is highly time-consuming. With automated focal loss we introduce a new loss function which substitutes this hyperparameter by a parameter that is automatically adapted during training progress and controls the amount of focusing on hard training examples. We show on the COCO benchmark that this leads to up to 30% faster training convergence. We further introduce a focal regression loss which, on the more challenging task of 3D vehicle detection, outperforms other loss functions by up to 1.8 AOS and can be used as a value-range-independent metric for regression. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 128,259
2310.08114 | Multi-Modal Sensor Fusion and Object Tracking for Autonomous Racing | Reliable detection and tracking of surrounding objects are indispensable for comprehensive motion prediction and planning of autonomous vehicles. Due to the limitations of individual sensors, the fusion of multiple sensor modalities is required to improve the overall detection capabilities. Additionally, robust motion tracking is essential for reducing the effect of sensor noise and improving state estimation accuracy. The reliability of the autonomous vehicle software becomes even more relevant in complex, adversarial high-speed scenarios at the vehicle handling limits in autonomous racing. In this paper, we present a modular multi-modal sensor fusion and tracking method for high-speed applications. The method is based on the Extended Kalman Filter (EKF) and is capable of fusing heterogeneous detection inputs to track surrounding objects consistently. A novel delay compensation approach makes it possible to reduce the influence of the perception software latency and to output an updated object list. It is the first fusion and tracking method validated in high-speed real-world scenarios at the Indy Autonomous Challenge 2021 and the Autonomous Challenge at CES (AC@CES) 2022, proving its robustness and computational efficiency on embedded systems. It does not require any labeled data and achieves position tracking residuals below 0.1 m. The related code is available as open-source software at https://github.com/TUMFTM/FusionTracking. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 399,273
2305.08778 | Copula Variational LSTM for High-dimensional Cross-market Multivariate
Dependence Modeling | We address an important yet challenging problem - modeling high-dimensional dependencies across multivariates such as financial indicators in heterogeneous markets. In reality, a market couples and influences others over time, and the financial variables of a market are also coupled. We make the first attempt to integrate variational sequential neural learning with copula-based dependence modeling to characterize both temporal observable and latent variable-based dependence degrees and structures across non-normal multivariates. Our variational neural network WPVC-VLSTM models variational sequential dependence degrees and structures across multivariate time series by variational long short-term memory networks and regular vine copula. The regular vine copula models non-normal and long-range distributional couplings across multiple dynamic variables. WPVC-VLSTM is verified in terms of both technical significance and portfolio forecasting performance. It outperforms benchmarks including linear models, stochastic volatility models, deep neural networks, and variational recurrent networks in cross-market portfolio forecasting. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 364,404
2005.05579 | Data-driven Algorithm for Scheduling with Total Tardiness | In this paper, we investigate the use of deep learning for solving a classical NP-hard single machine scheduling problem where the criterion is to minimize the total tardiness. Instead of designing an end-to-end machine learning model, we utilize a well-known decomposition of the problem and enhance it with a data-driven approach. We have designed a regressor containing a deep neural network that learns and predicts the criterion of a given set of jobs. The network acts as a polynomial-time estimator of the criterion that is used in a single-pass scheduling algorithm based on Lawler's decomposition theorem. Essentially, the regressor guides the algorithm to select the best position for each job. The experimental results show that our data-driven approach can efficiently generalize information from the training phase to significantly larger instances (up to 350 jobs) where it achieves an optimality gap of about 0.5%, which is four times less than the gap of the state-of-the-art NBR heuristic. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 176,774
1705.00770 | Galois LCD Codes over Finite Fields | In this paper, we study the complementary dual codes in more general setting (which are called Galois LCD codes) by a uniform method. A necessary and sufficient condition for linear codes to be Galois LCD codes is determined, and constacyclic codes to be Galois LCD codes are characterized. Some illustrative examples which constacyclic codes are Galois LCD MDS codes are provided as well. In particular, we study Hermitian LCD constacyclic codes. Finally, we present a construction of a class of Hermitian LCD codes which are also MDS codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 72,746 |
2103.09085 | Exact Sparse Orthogonal Dictionary Learning | Over the past decade, learning a dictionary from input images for sparse modeling has been one of the topics that receive the most research attention in image processing and compressed sensing. Most existing dictionary learning methods consider an over-complete dictionary, such as the K-SVD method, which may result in high mutual incoherence and therefore has a negative impact on recognition. On the other hand, the sparse codes are usually optimized by adding the $\ell_0$ or $\ell_1$-norm penalty, but with no strict sparsity guarantee. In this paper, we propose an orthogonal dictionary learning model which can obtain strictly sparse codes and an orthogonal dictionary with a global sequence convergence guarantee. We find that our method achieves better denoising results than over-complete dictionary-based learning methods, and has the additional advantage of high computational efficiency. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 225,075
1709.06047 | Bayesian Optimization Using Domain Knowledge on the ATRIAS Biped | Controllers in robotics often consist of expert-designed heuristics, which can be hard to tune in higher dimensions. It is typical to use simulation to learn these parameters, but controllers learned in simulation often don't transfer to hardware. This necessitates optimization directly on hardware. However, collecting data on hardware can be expensive. This has led to a recent interest in adapting data-efficient learning techniques to robotics. One popular method is Bayesian Optimization (BO), a sample-efficient black-box optimization scheme, but its performance typically degrades in higher dimensions. We aim to overcome this problem by incorporating domain knowledge to reduce dimensionality in a meaningful way, with a focus on bipedal locomotion. In previous work, we proposed a transformation based on knowledge of human walking that projected a 16-dimensional controller to a 1-dimensional space. In simulation, this showed enhanced sample efficiency when optimizing human-inspired neuromuscular walking controllers on a humanoid model. In this paper, we present a generalized feature transform applicable to non-humanoid robot morphologies and evaluate it on the ATRIAS bipedal robot -- in simulation and on hardware. We present three different walking controllers; two are evaluated on the real robot. Our results show that this feature transform captures important aspects of walking and accelerates learning on hardware and simulation, as compared to traditional BO. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 81,015 |
1302.3606 | On Separation Criterion and Recovery Algorithm for Chain Graphs | Chain graphs give a natural unifying point of view on Markov and Bayesian networks and enlarge the potential of graphical models for the description of conditional independence structures. In the paper, a direct graphical separation criterion for chain graphs, called c-separation, which generalizes the d-separation criterion for Bayesian networks, is introduced (recalled). It is equivalent to the classic moralization criterion for chain graphs and complete in the sense that for every chain graph there exists a probability distribution satisfying exactly the conditional independencies derivable from the chain graph by the c-separation criterion. Every class of Markov equivalent chain graphs can be uniquely described by a natural representative, called the largest chain graph. A recovery algorithm, which on the basis of the (conditional) dependency model induced by an unknown chain graph finds the corresponding largest chain graph, is presented. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 22,072
2212.11727 | A topological analysis of cointegrated data: a Z24 Bridge case study | The paper studies the topological changes in the natural frequencies of the Z24 Bridge before and after cointegration. The second natural frequency is known to be nonlinear in temperature, and this will serve as the main focal point of this work. Cointegration is a method of normalising time series data with respect to one another - often strongly-correlated time series. Cointegration is used in this paper to remove the effects of Environmental and Operational Variations, by cointegrating the first four natural frequencies of the Z24 Bridge data. The temperature effects on the natural frequency data are clearly visible within the data, and it is desirable, for the purposes of structural health monitoring, that these effects are removed. The univariate time series are embedded in higher-dimensional space, such that interesting topologies are formed. Topological data analysis is used to analyse the raw time series and their cointegrated equivalents. A standard topological data analysis pipeline is enacted, where simplicial complexes are constructed from the embedded point clouds. Topological properties, such as the persistent homology, are then calculated from the simplicial complexes. The persistent homology is then analysed to determine the topological structure of all the time series. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 337,864
2412.09229 | UADet: A Remarkably Simple Yet Effective Uncertainty-Aware Open-Set
Object Detection Framework | We tackle the challenging problem of Open-Set Object Detection (OSOD), which aims to detect both known and unknown objects in unlabelled images. The main difficulty arises from the absence of supervision for these unknown classes, making it challenging to distinguish them from the background. Existing OSOD detectors either fail to properly exploit or inadequately leverage the abundant unlabeled unknown objects in training data, restricting their performance. To address these limitations, we propose UADet, an Uncertainty-Aware Open-Set Object Detector that considers appearance and geometric uncertainty. By integrating these uncertainty measures, UADet effectively reduces the number of unannotated instances incorrectly utilized or omitted by previous methods. Extensive experiments on OSOD benchmarks demonstrate that UADet substantially outperforms previous state-of-the-art (SOTA) methods in detecting both known and unknown objects, achieving a 1.8x improvement in unknown recall while maintaining high performance on known classes. When extended to Open World Object Detection (OWOD), our method shows significant advantages over the current SOTA method, with average improvements of 13.8% and 6.9% in unknown recall on M-OWODB and S-OWODB benchmarks, respectively. Extensive results validate the effectiveness of our uncertainty-aware approach across different open-set scenarios. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 516,404 |
2308.12842 | Text Similarity from Image Contents using Statistical and Semantic
Analysis Techniques | Plagiarism detection is one of the most researched areas in the Natural Language Processing (NLP) community. A good plagiarism detector covers all the NLP methods, including semantics, named entities, paraphrases, etc., and produces detailed plagiarism reports. Detection of cross-lingual plagiarism requires deep knowledge of various advanced methods and algorithms to perform effective text similarity checking. Nowadays, plagiarists also try to hide their identity to avoid being caught in such an offense. They evade detection with techniques like paraphrasing, synonym replacement, mismatched citations, and translation from one language to another. Image Content Plagiarism Detection (ICPD) has gained importance, utilizing advanced image content processing to identify instances of plagiarism and ensure the integrity of image content. The issue of plagiarism extends beyond textual content, as images such as figures, graphs, and tables also have the potential to be plagiarized. However, image content plagiarism detection remains an unaddressed challenge. Therefore, there is a critical need to develop methods and systems for detecting plagiarism in image content. In this paper, a system is implemented to detect plagiarism from the contents of images such as figures, graphs, and tables. Alongside statistical algorithms such as Jaccard and cosine similarity, semantic algorithms such as LSA, BERT, and WordNet are introduced and outperform them in efficient and accurate plagiarism detection. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 387,688
2307.00228 | InferTurbo: A Scalable System for Boosting Full-graph Inference of Graph
Neural Network over Huge Graphs | GNN inference is a non-trivial task, especially in industrial scenarios with giant graphs, given three main challenges, i.e., scalability tailored for full-graph inference on huge graphs, inconsistency caused by stochastic acceleration strategies (e.g., sampling), and the serious redundant computation issue. To address the above challenges, we propose a scalable system named InferTurbo to boost GNN inference tasks in industrial scenarios. Inspired by the philosophy of "think-like-a-vertex", a GAS-like (Gather-Apply-Scatter) schema is proposed to describe the computation paradigm and data flow of GNN inference. The computation of GNNs is expressed in an iterative manner, in which a vertex gathers messages via in-edges, updates its state information by forwarding those messages through an associated layer of the GNN, and then sends the updated information to other vertexes via out-edges. Following the schema, the proposed InferTurbo can be built with alternative backends (e.g., a batch processing system or a graph computing system). Moreover, InferTurbo introduces several strategies like shadow-nodes and partial-gather to handle nodes with large degrees for better load balancing. With InferTurbo, GNN inference can be hierarchically conducted over the full graph without sampling and redundant computation. Experimental results demonstrate that our system is robust and efficient for inference tasks over graphs containing some hub nodes with many adjacent edges. Meanwhile, the system achieves remarkable performance compared with the traditional inference pipeline, and it can finish a GNN inference task over a graph with tens of billions of nodes and hundreds of billions of edges within 2 hours. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 376,924
2109.03168 | Locally Recoverable Streaming Codes for Packet-Erasure Recovery | Streaming codes are a class of packet-level erasure codes that are designed with the goal of ensuring recovery in low-latency fashion, of erased packets over a communication network. It is well-known in the streaming code literature, that diagonally embedding codewords of a $[\tau+1,\tau+1-a]$ Maximum Distance Separable (MDS) code within the packet stream, leads to rate-optimal streaming codes capable of recovering from $a$ arbitrary packet erasures, under a strict decoding delay constraint $\tau$. Thus MDS codes are geared towards the efficient handling of the worst-case scenario corresponding to the occurrence of $a$ erasures. In the present paper, we have an increased focus on the efficient handling of the most-frequent erasure patterns. We study streaming codes which in addition to recovering from $a>1$ arbitrary packet erasures under a decoding delay $\tau$, have the ability to handle the more common occurrence of a single-packet erasure, while incurring smaller delay $r<\tau$. We term these codes as $(a,\tau,r)$ locally recoverable streaming codes (LRSCs), since our single-erasure recovery requirement is similar to the requirement of locality in a coded distributed storage system. We characterize the maximum possible rate of an LRSC by presenting rate-optimal constructions for all possible parameters $\{a,\tau,r\}$. Although the rate-optimal LRSC construction provided in this paper requires large field size, the construction is explicit. It is also shown that our $(a,\tau=a(r+1)-1,r)$ LRSC construction provides the additional guarantee of recovery from the erasure of $h, 1 \leq h \leq a$, packets, with delay $h(r+1)-1$. The construction thus offers graceful degradation in decoding delay with increasing number of erasures. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 253,978 |
2410.17402 | Invisible Manipulation: Deep Reinforcement Learning Enhanced Stealthy
Attacks on Battery Energy Management Systems | This paper introduces "invisible manipulation," an innovative cyber-attack mechanism achieved through strategically timed stealthy false data injection attacks (SFDIAs). By stealthily manipulating measurements of a critical asset prior to the target time period, the attacker can subtly guide the engineering system toward a predetermined operational state without detection. Using the battery energy management system (BEMS) as a case study, we employ deep reinforcement learning (DRL) to generate synthetic measurements, such as battery voltage and current, that align closely with actual measurements. These synthetic measurements, falling within the acceptable error margin of the residual-based bad data detection algorithm provided by state estimation, can evade detection and mislead Extended Kalman-filter-based State of Charge estimation. Subsequently, considering the deceptive data as valid inputs, the BEMS will operate the BESS towards the attacker-desired operational states when the targeted time period comes. The use of the DRL-based scheme allows us to convert an online optimization problem into an offline training process, thereby alleviating the computational burden for real-time implementation. Comprehensive testing on a high-fidelity microgrid real-time simulation testbed validates the effectiveness and adaptability of the proposed methods in achieving different attack objectives. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 501,443
1808.08378 | Fusion++: Volumetric Object-Level SLAM | We propose an online object-level SLAM system which builds a persistent and accurate 3D graph map of arbitrary reconstructed objects. As an RGB-D camera browses a cluttered indoor scene, Mask-RCNN instance segmentations are used to initialise compact per-object Truncated Signed Distance Function (TSDF) reconstructions with object size-dependent resolutions and a novel 3D foreground mask. Reconstructed objects are stored in an optimisable 6DoF pose graph which is our only persistent map representation. Objects are incrementally refined via depth fusion, and are used for tracking, relocalisation and loop closure detection. Loop closures cause adjustments in the relative pose estimates of object instances, but no intra-object warping. Each object also carries semantic information which is refined over time and an existence probability to account for spurious instance predictions. We demonstrate our approach on a hand-held RGB-D sequence from a cluttered office scene with a large number and variety of object instances, highlighting how the system closes loops and makes good use of existing objects on repeated loops. We quantitatively evaluate the trajectory error of our system against a baseline approach on the RGB-D SLAM benchmark, and qualitatively compare reconstruction quality of discovered objects on the YCB video dataset. Performance evaluation shows our approach is highly memory efficient and runs online at 4-8Hz (excluding relocalisation) despite not being optimised at the software level. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 105,935 |
2406.13560 | Lexically Grounded Subword Segmentation | We present three innovations in tokenization and subword segmentation. First, we propose to use unsupervised morphological analysis with Morfessor as pre-tokenization. Second, we present an algebraic method for obtaining subword embeddings grounded in a word embedding space. Based on that, we design a novel subword segmentation algorithm that uses the embeddings, ensuring that the procedure considers lexical meaning. Third, we introduce an efficient segmentation algorithm based on a subword bigram model that can be initialized with the lexically aware segmentation method to avoid using Morfessor and large embedding tables at inference time. We evaluate the proposed approaches using two intrinsic metrics and measure their performance on two downstream tasks: part-of-speech tagging and machine translation. Our experiments show significant improvements in the morphological plausibility of the segmentation when evaluated using segmentation precision on morpheme boundaries and improved R\'enyi efficiency in 8 languages. Although the proposed tokenization methods do not have a large impact on automatic translation quality, we observe consistent performance gains in the arguably more morphological task of part-of-speech tagging. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 465,899 |
2403.08262 | BiTT: Bi-directional Texture Reconstruction of Interacting Two Hands
from a Single Image | Creating personalized hand avatars is important to offer a realistic experience to users on AR / VR platforms. While most prior studies focused on reconstructing 3D hand shapes, some recent work has tackled the reconstruction of hand textures on top of shapes. However, these methods are often limited to capturing pixels on the visible side of a hand, requiring diverse views of the hand in a video or multiple images as input. In this paper, we propose a novel method, BiTT(Bi-directional Texture reconstruction of Two hands), which is the first end-to-end trainable method for relightable, pose-free texture reconstruction of two interacting hands taking only a single RGB image, by three novel components: 1) bi-directional (left $\leftrightarrow$ right) texture reconstruction using the texture symmetry of left / right hands, 2) utilizing a texture parametric model for hand texture recovery, and 3) the overall coarse-to-fine stage pipeline for reconstructing personalized texture of two interacting hands. BiTT first estimates the scene light condition and albedo image from an input image, then reconstructs the texture of both hands through the texture parametric model and bi-directional texture reconstructor. In experiments using InterHand2.6M and RGB2Hands datasets, our method significantly outperforms state-of-the-art hand texture reconstruction methods quantitatively and qualitatively. The code is available at https://github.com/yunminjin2/BiTT | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 437,261 |
2112.11243 | Projected Sliced Wasserstein Autoencoder-based Hyperspectral Images
Anomaly Detection | Anomaly detection (AD) has been an active research area in various domains. Yet the increasing scale, complexity, and dimensionality of data pose challenges for traditional methods. Recently, deep generative models, such as the variational autoencoder (VAE), have sparked renewed interest in the AD problem. However, the probability distribution divergence used as the regularization is too strong, which prevents the model from capturing the manifold of the true data. In this paper, we propose the Projected Sliced Wasserstein (PSW) autoencoder-based anomaly detection method. Rooted in optimal transportation, the PSW distance is a weaker distribution measure compared with the $f$-divergence. In particular, a computation-friendly eigen-decomposition method is leveraged to find the principal components for slicing the high-dimensional data. In this case, the Wasserstein distance can be calculated in closed form, even when the prior distribution is not Gaussian. Comprehensive experiments conducted on various real-world hyperspectral anomaly detection benchmarks demonstrate the superior performance of the proposed method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 272,654
2406.14185 | Failure-Resilient Distributed Inference with Model Compression over
Heterogeneous Edge Devices | The distributed inference paradigm enables the computation workload to be distributed across multiple devices, facilitating the implementation of deep learning based intelligent services in extremely resource-constrained Internet of Things (IoT) scenarios. Yet it poses great challenges to perform complicated inference tasks relying on a cluster of IoT devices that are heterogeneous in their computing/communication capacity and prone to crash or timeout failures. In this paper, we present RoCoIn, a robust cooperative inference mechanism for locally distributed execution of deep neural network-based inference tasks over heterogeneous edge devices. It creates a set of independent and compact student models that are learned from a large model using knowledge distillation for distributed deployment. In particular, the devices are strategically grouped to redundantly deploy and execute the same student model such that the inference process is resilient to any local failures, while a joint knowledge partition and student model assignment scheme is designed to minimize the response latency of the distributed inference system in the presence of devices with diverse capacities. Extensive simulations are conducted to corroborate the superior performance of our RoCoIn for distributed inference compared to several baselines, and the results demonstrate its efficacy in timely inference and failure resiliency. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 466,196
1702.02367 | Iterative Multi-document Neural Attention for Multiple Answer Prediction | People have information needs of varying complexity, which can be solved by an intelligent agent able to answer questions formulated in a proper way, possibly considering user context and preferences. In a scenario in which the user profile can be considered as a question, intelligent agents able to answer questions can be used to find the most relevant answers for a given user. In this work we propose a novel model based on Artificial Neural Networks to answer questions with multiple answers by exploiting multiple facts retrieved from a knowledge base. The model is evaluated on the factoid Question Answering and top-n recommendation tasks of the bAbI Movie Dialog dataset. After assessing the performance of the model on both tasks, we try to define the long-term goal of a conversational recommender system able to interact using natural language and to support users in their information seeking processes in a personalized way. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 67,965
2011.03176 | On the Ergodicity, Bias and Asymptotic Normality of Randomized Midpoint
Sampling Method | The randomized midpoint method, proposed by [SL19], has emerged as an optimal discretization procedure for simulating the continuous time Langevin diffusions. Focusing on the case of strong-convex and smooth potentials, in this paper, we analyze several probabilistic properties of the randomized midpoint discretization method for both overdamped and underdamped Langevin diffusions. We first characterize the stationary distribution of the discrete chain obtained with constant step-size discretization and show that it is biased away from the target distribution. Notably, the step-size needs to go to zero to obtain asymptotic unbiasedness. Next, we establish the asymptotic normality for numerical integration using the randomized midpoint method and highlight the relative advantages and disadvantages over other discretizations. Our results collectively provide several insights into the behavior of the randomized midpoint discretization method, including obtaining confidence intervals for numerical integrations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 205,162 |
1909.03560 | Evolving Order and Chaos: Comparing Particle Swarm Optimization and
Genetic Algorithms for Global Coordination of Cellular Automata | We apply two evolutionary search algorithms: Particle Swarm Optimization (PSO) and Genetic Algorithms (GAs) to the design of Cellular Automata (CA) that can perform computational tasks requiring global coordination. In particular, we compare search efficiency for PSO and GAs applied to both the density classification problem and to the novel generation of 'chaotic' CA. Our work furthermore introduces a new variant of PSO, the Binary Global-Local PSO (BGL-PSO). | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | 144,526 |
2501.16786 | Exploring the Role of Explicit Temporal Modeling in Multimodal Large
Language Models for Video Understanding | Applying Multimodal Large Language Models (MLLMs) to video understanding presents significant challenges due to the need to model temporal relations across frames. Existing approaches adopt either implicit temporal modeling, relying solely on the LLM decoder, or explicit temporal modeling, employing auxiliary temporal encoders. To investigate this debate between the two paradigms, we propose the Stackable Temporal Encoder (STE). STE enables flexible explicit temporal modeling with adjustable temporal receptive fields and token compression ratios. Using STE, we systematically compare implicit and explicit temporal modeling across dimensions such as overall performance, token compression effectiveness, and temporal-specific understanding. We also explore STE's design considerations and broader impacts as a plug-in module and in image modalities. Our findings emphasize the critical role of explicit temporal modeling, providing actionable insights to advance video MLLMs. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 528,111 |
2003.09904 | On the snappability and singularity-distance of frameworks with bars and
triangular plates | In a recent article the author presented a method to measure the snapping capability -- shortly called snappability -- of bar-joint frameworks based on the total elastic strain energy by computing the deformation of all bars using Hooke's law and the definition of Cauchy/Engineering strain. Within the paper at hand, we extend this approach to isostatic frameworks composed of bars and triangular plates by using the physical concept of Green-Lagrange strain. An intrinsic pseudometric based on the resulting total elastic strain energy density cannot only be used for evaluating the snappability but also for measuring the distance to the closest singular configuration. The presented methods are demonstrated on the basis of the 3-legged planar parallel manipulator. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 169,180 |
1304.1093 | A New Algorithm for Finding MAP Assignments to Belief Networks | We present a new algorithm for finding maximum a-posteriori (MAP) assignments of values to belief networks. The belief network is compiled into a network consisting only of nodes with boolean (i.e. only 0 or 1) conditional probabilities. The MAP assignment is then found using a best-first search on the resulting network. We argue that, as one would anticipate, the algorithm is exponential for the general case, but only linear in the size of the network for polytrees. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 23,446
cs/0604087 | Probabilistic Automata for Computing with Words | Usually, probabilistic automata and probabilistic grammars have crisp symbols as inputs, which can be viewed as the formal models of computing with values. In this paper, we first introduce probabilistic automata and probabilistic grammars for computing with (some special) words in a probabilistic framework, where the words are interpreted as probabilistic distributions or possibility distributions over a set of crisp symbols. By probabilistic conditioning, we then establish a retraction principle from computing with words to computing with values for handling crisp inputs and a generalized extension principle from computing with words to computing with all words for handling arbitrary inputs. These principles show that computing with values and computing with all words can be respectively implemented by computing with some special words. To compare the transition probabilities of two near inputs, we also examine some analytical properties of the transition probability functions of generalized extensions. Moreover, the retractions and the generalized extensions are shown to be equivalence-preserving. Finally, we clarify some relationships among the retractions, the generalized extensions, and the extensions studied recently by Qiu and Wang. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 539,404 |
2209.11228 | NamedMask: Distilling Segmenters from Complementary Foundation Models | The goal of this work is to segment and name regions of images without access to pixel-level labels during training. To tackle this task, we construct segmenters by distilling the complementary strengths of two foundation models. The first, CLIP (Radford et al. 2021), exhibits the ability to assign names to image content but lacks an accessible representation of object structure. The second, DINO (Caron et al. 2021), captures the spatial extent of objects but has no knowledge of object names. Our method, termed NamedMask, begins by using CLIP to construct category-specific archives of images. These images are pseudo-labelled with a category-agnostic salient object detector bootstrapped from DINO, then refined by category-specific segmenters using the CLIP archive labels. Thanks to the high quality of the refined masks, we show that a standard segmentation architecture trained on these archives with appropriate data augmentation achieves impressive semantic segmentation abilities for both single-object and multi-object images. As a result, our proposed NamedMask performs favourably against a range of prior work on five benchmarks including the VOC2012, COCO and large-scale ImageNet-S datasets. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 319,118 |
2203.13906 | Biolink Model: A Universal Schema for Knowledge Graphs in Clinical,
Biomedical, and Translational Science | Within clinical, biomedical, and translational science, an increasing number of projects are adopting graphs for knowledge representation. Graph-based data models elucidate the interconnectedness between core biomedical concepts, enable data structures to be easily updated, and support intuitive queries, visualizations, and inference algorithms. However, knowledge discovery across these "knowledge graphs" (KGs) has remained difficult. Data set heterogeneity and complexity; the proliferation of ad hoc data formats; poor compliance with guidelines on findability, accessibility, interoperability, and reusability; and, in particular, the lack of a universally-accepted, open-access model for standardization across biomedical KGs has left the task of reconciling data sources to downstream consumers. Biolink Model is an open source data model that can be used to formalize the relationships between data structures in translational science. It incorporates object-oriented classification and graph-oriented features. The core of the model is a set of hierarchical, interconnected classes (or categories) and relationships between them (or predicates), representing biomedical entities such as gene, disease, chemical, anatomical structure, and phenotype. The model provides class and edge attributes and associations that guide how entities should relate to one another. Here, we highlight the need for a standardized data model for KGs, describe Biolink Model, and compare it with other models. We demonstrate the utility of Biolink Model in various initiatives, including the Biomedical Data Translator Consortium and the Monarch Initiative, and show how it has supported easier integration and interoperability of biomedical KGs, bringing together knowledge from multiple sources and helping to realize the goals of translational science. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 287,795
2502.06813 | Policy Guided Tree Search for Enhanced LLM Reasoning | Despite their remarkable capabilities, large language models often struggle with tasks requiring complex reasoning and planning. While existing approaches like Chain-of-Thought prompting and tree search techniques show promise, they are limited by their reliance on predefined heuristics and computationally expensive exploration strategies. We propose Policy-Guided Tree Search (PGTS), a framework that combines reinforcement learning with structured tree exploration to efficiently navigate reasoning paths. Our key innovation is a learned policy that dynamically decides between expanding, branching, backtracking, or terminating exploration, eliminating the need for manual heuristics or exhaustive search. Experiments across mathematical reasoning, logical deduction, and planning benchmarks demonstrate that PGTS achieves superior reasoning performance while significantly reducing computational costs compared to existing methods. These results establish PGTS as a scalable and effective solution for tackling complex reasoning tasks with LLMs. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 532,259 |
2405.07257 | SPEAK: Speech-Driven Pose and Emotion-Adjustable Talking Head Generation | Most earlier research on talking face generation has focused on the synchronization of lip motion and speech content. However, head pose and facial emotions are equally important characteristics of natural faces. While audio-driven talking face generation has seen notable advancements, existing methods either overlook facial emotions or are limited to specific individuals and cannot be applied to arbitrary subjects. In this paper, we propose a novel one-shot Talking Head Generation framework (SPEAK) that distinguishes itself from the general Talking Face Generation by enabling emotional and postural control. Specifically, we introduce an Inter-Reconstructed Feature Disentanglement (IRFD) module to decouple facial features into three latent spaces. Then we design a face editing module that modifies speech content and facial latent codes into a single latent space. Subsequently, we present a novel generator that employs modified latent codes derived from the editing module to regulate emotional expression, head poses, and speech content in synthesizing facial animations. Extensive trials demonstrate that our method ensures lip synchronization with the audio while enabling decoupled control of facial features; it can generate realistic talking heads with coordinated lip motions, authentic facial emotions, and smooth head movements. The demo video is available: https://anonymous.4open.science/r/SPEAK-8A22 | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 453,638
2405.13366 | Anticipating Optical Availability in Hybrid RF/FSO Links Using RF
Beacons and Deep Learning | Radio frequency (RF) communications offer reliable but low data rates and energy-inefficient satellite links, while free-space optical (FSO) promises high bandwidth but struggles with disturbances imposed by atmospheric effects. A hybrid RF/FSO architecture aims to achieve optimal reliability along with high data rates for space communications. Accurate prediction of dynamic ground-to-satellite FSO link availability is critical for routing decisions in low-earth orbit constellations. In this paper, we propose a system leveraging ubiquitous RF links to proactively forecast FSO link degradation prior to signal drops below threshold levels. This enables pre-calculation of rerouting to maximally maintain high data rate FSO links throughout the duration of weather effects. We implement a supervised learning model to anticipate FSO attenuation based on the analysis of RF patterns. Through the simulation of a dense lower earth orbit (LEO) satellite constellation, we demonstrate the efficacy of our approach in a simulated satellite network, highlighting the balance between predictive accuracy and prediction duration. An emulated cloud attenuation model is proposed which provides insight into the temporal profiles of RF signals and their correlation to FSO channel dynamics. Our investigation sheds light on the trade-offs between prediction horizon and accuracy arising from RF beacon proximity, achieving a prediction accuracy of 86\% with 16 RF beacons. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 455,911 |
2111.06211 | Model-Based Reinforcement Learning via Stochastic Hybrid Models | Optimal control of general nonlinear systems is a central challenge in automation. Enabled by powerful function approximators, data-driven approaches to control have recently successfully tackled challenging applications. However, such methods often obscure the structure of dynamics and control behind black-box over-parameterized representations, thus limiting our ability to understand closed-loop behavior. This paper adopts a hybrid-system view of nonlinear modeling and control that lends an explicit hierarchical structure to the problem and breaks down complex dynamics into simpler localized units. We consider a sequence modeling paradigm that captures the temporal structure of the data and derive an expectation-maximization (EM) algorithm that automatically decomposes nonlinear dynamics into stochastic piecewise affine models with nonlinear transition boundaries. Furthermore, we show that these time-series models naturally admit a closed-loop extension that we use to extract local polynomial feedback controllers from nonlinear experts via behavioral cloning. Finally, we introduce a novel hybrid relative entropy policy search (Hb-REPS) technique that incorporates the hierarchical nature of hybrid models and optimizes a set of time-invariant piecewise feedback controllers derived from a piecewise polynomial approximation of a global state-value function. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 266,013 |
2412.05265 | Reinforcement Learning: An Overview | This manuscript gives a big-picture, up-to-date overview of the field of (deep) reinforcement learning and sequential decision making, covering value-based RL, policy-gradient methods, model-based methods, and various other topics (including a very brief discussion of RL+LLMs). | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 514,756 |
2112.06643 | On the Dynamics of Hopfield Neural Networks on Unit Quaternions | In this paper, we first address the dynamics of the elegant multi-valued quaternionic Hopfield neural network (MV-QHNN) proposed by Minemoto and collaborators. Contrary to what was expected, we show that the MV-QHNN, as well as one of its variations, does not always come to rest at an equilibrium state under the usual conditions. In fact, we provide simple examples in which the network yields a periodic sequence of quaternionic state vectors. Afterward, we turn our attention to the continuous-valued quaternionic Hopfield neural network (CV-QHNN), which can be derived from the MV-QHNN by means of a limit process. The CV-QHNN can be implemented more easily than the MV-QHNN model. Furthermore, the asynchronous CV-QHNN always settles down into an equilibrium state under the usual conditions. Theoretical issues are all illustrated by examples in this paper. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 271,251
2207.12773 | Quiver neural networks | We develop a uniform theoretical approach towards the analysis of various neural network connectivity architectures by introducing the notion of a quiver neural network. Inspired by quiver representation theory in mathematics, this approach gives a compact way to capture elaborate data flows in complex network architectures. As an application, we use parameter space symmetries to prove a lossless model compression algorithm for quiver neural networks with certain non-pointwise activations known as rescaling activations. In the case of radial rescaling activations, we prove that training the compressed model with gradient descent is equivalent to training the original model with projected gradient descent. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 310,112 |
2109.04813 | TACS: Taxonomy Adaptive Cross-Domain Semantic Segmentation | Traditional domain adaptive semantic segmentation addresses the task of adapting a model to a novel target domain under limited or no additional supervision. While tackling the input domain gap, the standard domain adaptation settings assume no domain change in the output space. In semantic prediction tasks, different datasets are often labeled according to different semantic taxonomies. In many real-world settings, the target domain task requires a different taxonomy than the one imposed by the source domain. We therefore introduce the more general taxonomy adaptive cross-domain semantic segmentation (TACS) problem, allowing for inconsistent taxonomies between the two domains. We further propose an approach that jointly addresses the image-level and label-level domain adaptation. On the label-level, we employ a bilateral mixed sampling strategy to augment the target domain, and a relabelling method to unify and align the label spaces. We address the image-level domain gap by proposing an uncertainty-rectified contrastive learning method, leading to more domain-invariant and class-discriminative features. We extensively evaluate the effectiveness of our framework under different TACS settings: open taxonomy, coarse-to-fine taxonomy, and implicitly-overlapping taxonomy. Our approach outperforms the previous state-of-the-art by a large margin, while being capable of adapting to target taxonomies. Our implementation is publicly available at https://github.com/ETHRuiGong/TADA. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 254,547 |
2109.07472 | Single-camera Two-Wavelength Imaging Pyrometry for Melt Pool Temperature
Measurement and Monitoring in Laser Powder Bed Fusion based Additive
Manufacturing | Melt pool (MP) temperature is one of the determining factors and key signatures for the properties of printed components during metal additive manufacturing (AM). The state-of-the-art measurement systems are hindered by both the equipment cost and the large-scale data acquisition and processing demands. In this work, we introduce a novel coaxial high-speed single-camera two-wavelength imaging pyrometer (STWIP) system as opposed to the typical utilization of multiple cameras for measuring MP temperature profiles through a laser powder bed fusion (LPBF) process. Developed on a commercial LPBF machine (EOS M290), the STWIP system is demonstrated to be able to quantitatively monitor MP temperature and variation for 50 layers at high framerates (> 30,000 fps) during a print of five standard fatigue specimens. High performance computing is employed to analyze the acquired big data of MP images for determining each MP's average temperature and 2D temperature profile. The MP temperature evolution in the gage section of a fatigue specimen is also examined at a temporal resolution of 1ms by evaluating the derived MP temperatures of the printed sample's first, middle, and last layers. This paper is the first of its kind on monitoring MP temperature distribution and evolution at such a large, detailed scale for longer durations in practical applications. Future work includes MP registration and machine learning of MP-Part Property relations. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 255,540
2408.07430 | UAHOI: Uncertainty-aware Robust Interaction Learning for HOI Detection | This paper focuses on Human-Object Interaction (HOI) detection, addressing the challenge of identifying and understanding the interactions between humans and objects within a given image or video frame. Spearheaded by the Detection Transformer (DETR), recent developments have led to significant improvements by replacing traditional region proposals with a set of learnable queries. However, despite the powerful representation capabilities provided by Transformers, existing Human-Object Interaction (HOI) detection methods still yield low confidence levels when dealing with complex interactions and are prone to overlooking interactive actions. To address these issues, we propose a novel approach \textsc{UAHOI}, Uncertainty-aware Robust Human-Object Interaction Learning, which explicitly estimates prediction uncertainty during the training process to refine both detection and interaction predictions. Our model not only predicts the HOI triplets but also quantifies the uncertainty of these predictions. Specifically, we model this uncertainty through the variance of predictions and incorporate it into the optimization objective, allowing the model to adaptively adjust its confidence threshold based on prediction variance. This integration helps mitigate the adverse effects of incorrect or ambiguous predictions that are common in traditional methods without any hand-designed components, serving as an automatic confidence threshold. Our method is compatible with existing HOI detection methods and demonstrates improved accuracy. We evaluate \textsc{UAHOI} on two standard benchmarks in the field: V-COCO and HICO-DET, which represent challenging scenarios for HOI detection. Through extensive experiments, we demonstrate that \textsc{UAHOI} achieves significant improvements over existing state-of-the-art methods, enhancing both the accuracy and robustness of HOI detection. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 480,579
1712.02754 | On the Duality Between Retinex and Image Dehazing | Image dehazing deals with the removal of undesired loss of visibility in outdoor images due to the presence of fog. Retinex is a color vision model mimicking the ability of the Human Visual System to robustly discount varying illuminations when observing a scene under different spectral lighting conditions. Retinex has been widely explored in the computer vision literature for image enhancement and other related tasks. While these two problems are apparently unrelated, the goal of this work is to show that they can be connected by a simple linear relationship. Specifically, most Retinex-based algorithms have the characteristic feature of always increasing image brightness, which turns them into ideal candidates for effective image dehazing by directly applying Retinex to a hazy image whose intensities have been inverted. In this paper, we give theoretical proof that Retinex on inverted intensities is a solution to the image dehazing problem. Comprehensive qualitative and quantitative results indicate that several classical and modern implementations of Retinex can be transformed into competing image dehazing algorithms performing on par with more complex fog removal methods, and can overcome some of the main challenges associated with this problem. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 86,332
2210.03566 | Automated segmentation and morphological characterization of placental
histology images based on a single labeled image | In this study, a novel method of data augmentation has been presented for the segmentation of placental histological images when the labeled data are scarce. This method generates new realizations of the placenta intervillous morphology while maintaining the general textures and orientations. As a result, a diversified artificial dataset of images is generated that can be used for training deep learning segmentation models. We have observed that on average the presented method of data augmentation led to a 42% decrease in the binary cross-entropy loss of the validation dataset compared to the common approach in the literature. Additionally, the morphology of the intervillous space is studied under the effect of the proposed image reconstruction technique, and the diversity of the artificially generated population is quantified. Due to the high resemblance of the generated images to the real ones, the applications of the proposed method may not be limited to placental histological images, and it is recommended that other types of tissues be investigated in future studies. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 322,089 |
2210.10123 | Interpolated SelectionConv for Spherical Images and Surfaces | We present a new and general framework for convolutional neural network operations on spherical (or omnidirectional) images. Our approach represents the surface as a graph of connected points that doesn't rely on a particular sampling strategy. Additionally, by using an interpolated version of SelectionConv, we can operate on the sphere while using existing 2D CNNs and their weights. Since our method leverages existing graph implementations, it is also fast and can be fine-tuned efficiently. Our method is also general enough to be applied to any surface type, even those that are topologically non-simple. We demonstrate the effectiveness of our technique on the tasks of style transfer and segmentation for spheres as well as stylization for 3D meshes. We provide a thorough ablation study of the performance of various spherical sampling strategies. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 324,792 |
2112.04737 | Asynchronous Semi-Decentralized Federated Edge Learning for
Heterogeneous Clients | Federated edge learning (FEEL) has drawn much attention as a privacy-preserving distributed learning framework for mobile edge networks. In this work, we investigate a novel semi-decentralized FEEL (SD-FEEL) architecture where multiple edge servers collaborate to incorporate more data from edge devices in training. Despite the low training latency enabled by fast edge aggregation, the device heterogeneity in computational resources deteriorates the efficiency. This paper proposes an asynchronous training algorithm for SD-FEEL to overcome this issue, where edge servers can independently set deadlines for the associated client nodes and trigger the model aggregation. To deal with different levels of staleness, we design a staleness-aware aggregation scheme and analyze its convergence performance. Simulation results demonstrate the effectiveness of our proposed algorithm in achieving faster convergence and better learning performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 270,628 |
2408.05438 | Convergence Guarantee of Dynamic Programming for LTL Surrogate Reward | Linear Temporal Logic (LTL) is a formal way of specifying complex objectives for planning problems modeled as Markov Decision Processes (MDPs). The planning problem aims to find the optimal policy that maximizes the satisfaction probability of the LTL objective. One way to solve the planning problem is to use the surrogate reward with two discount factors and dynamic programming, which bypasses the graph analysis used in traditional model-checking. The surrogate reward is designed such that its value function represents the satisfaction probability. However, in some cases where one of the discount factors is set to $1$ for higher accuracy, the computation of the value function using dynamic programming is not guaranteed. This work shows that a multi-step contraction always exists during dynamic programming updates, guaranteeing that the approximate value function will converge exponentially to the true value function. Thus, the computation of satisfaction probability is guaranteed. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 479,786 |
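The abstract above rests on the standard contraction argument for dynamic programming. As context (not the paper's two-discount surrogate reward), here is a minimal single-discount value iteration sketch: with a discount factor gamma < 1, each Bellman update is a gamma-contraction, so the value estimate converges exponentially to the fixed point. The transition tensor and reward vector below are illustrative assumptions.

```python
import numpy as np

def value_iteration(P, R, gamma=0.99, tol=1e-8):
    """Generic discounted value iteration.

    P: (A, S, S) transition probabilities, P[a, s, s'] = Pr(s' | s, a)
    R: (S,) state rewards
    With gamma < 1 the update is a contraction, so the sup-norm error
    shrinks by a factor of gamma per sweep (exponential convergence).
    """
    A, S, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R[None, :] + gamma * (P @ V)   # (A, S): one backup per action
        V_new = Q.max(axis=0)              # greedy over actions
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

# Tiny 2-state example: state 1 is absorbing with reward 1, state 0 leads to it.
P = np.zeros((1, 2, 2))
P[0] = [[0.0, 1.0], [0.0, 1.0]]
R = np.array([0.0, 1.0])
V = value_iteration(P, R, gamma=0.99)      # V[1] -> 1/(1-0.99) = 100
```

When one discount factor is set to 1, as the abstract notes, this per-sweep contraction is lost, which is exactly the gap the paper's multi-step contraction result closes.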
2502.02481 | Multilingual Machine Translation with Open Large Language Models at
Practical Scale: An Empirical Study | Large language models (LLMs) have shown continuously improving multilingual capabilities, and even small-scale open-source models have demonstrated rapid performance enhancement. In this paper, we systematically explore the abilities of open LLMs with less than ten billion parameters to handle multilingual machine translation (MT) tasks. We conduct comprehensive evaluations on six popular LLMs and find that models like Gemma2-9B exhibit impressive multilingual translation capabilities. We then introduce the Parallel-First Monolingual-Second (PFMS) data mixing strategy in the continual pretraining stage to further enhance the MT performance and present GemmaX2-28, a 9B model achieving top-tier multilingual translation performance across 28 languages. Specifically, GemmaX2-28 consistently outperforms the state-of-the-art (SOTA) models such as TowerInstruct and XALMA and achieves competitive performance with Google Translate and GPT-4-turbo. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 530,338 |
2307.06304 | Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and
Resolution | The ubiquitous and demonstrably suboptimal choice of resizing images to a fixed resolution before processing them with computer vision models has not yet been successfully challenged. However, models such as the Vision Transformer (ViT) offer flexible sequence-based modeling, and hence varying input sequence lengths. We take advantage of this with NaViT (Native Resolution ViT) which uses sequence packing during training to process inputs of arbitrary resolutions and aspect ratios. Alongside flexible model usage, we demonstrate improved training efficiency for large-scale supervised and contrastive image-text pretraining. NaViT can be efficiently transferred to standard tasks such as image and video classification, object detection, and semantic segmentation and leads to improved results on robustness and fairness benchmarks. At inference time, the input resolution flexibility can be used to smoothly navigate the test-time cost-performance trade-off. We believe that NaViT marks a departure from the standard, CNN-designed, input and modelling pipeline used by most computer vision models, and represents a promising direction for ViTs. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 379,028 |
2409.16620 | Optimized Monte Carlo Tree Search for Enhanced Decision Making in the
FrozenLake Environment | Monte Carlo Tree Search (MCTS) is a powerful algorithm for solving complex decision-making problems. This paper presents an optimized MCTS implementation applied to the FrozenLake environment, a classic reinforcement learning task characterized by stochastic transitions. The optimization leverages cumulative reward and visit count tables along with the Upper Confidence Bound for Trees (UCT) formula, resulting in efficient learning in a slippery grid world. We benchmark our implementation against other decision-making algorithms, including MCTS with Policy and Q-Learning, and perform a detailed comparison of their performance. The results demonstrate that our optimized approach effectively maximizes rewards and success rates while minimizing convergence time, outperforming baseline methods, especially in environments with inherent randomness. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 491,422 |
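The row above names the Upper Confidence Bound for Trees (UCT) formula used during MCTS selection. As an illustrative sketch (the exploration constant and the reward/visit bookkeeping here are generic assumptions, not the paper's exact implementation), child selection balances the empirical mean reward against an exploration bonus:

```python
import math

def uct_score(total_reward, visits, parent_visits, c=math.sqrt(2)):
    """UCT: exploitation term (mean reward) plus exploration bonus."""
    if visits == 0:
        return float("inf")  # unvisited children are expanded first
    return total_reward / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children):
    """children: list of (total_reward, visits) pairs for one node.

    Returns the index of the child maximizing the UCT score; the parent's
    visit count is taken as the sum of its children's visits.
    """
    parent_visits = sum(v for _, v in children)
    scores = [uct_score(r, v, parent_visits) for r, v in children]
    return max(range(len(children)), key=scores.__getitem__)
```

For example, an unvisited child always wins selection (infinite score), while among visited children a rarely tried arm can outrank a well-explored one with a slightly higher mean, which is what lets MCTS keep exploring in stochastic environments like FrozenLake.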