id (string) | title (string) | abstract (string) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2407.05355 | VideoCoT: A Video Chain-of-Thought Dataset with Active Annotation Tool | Multimodal large language models (MLLMs) are flourishing, but they mainly focus on images and pay less attention to videos, especially in sub-fields such as prompt engineering, video chain-of-thought (CoT), and instruction tuning on videos. Therefore, we explore the collection of video CoT datasets to support video OpenQA and improve the reasoning ability of MLLMs. Unfortunately, building such video CoT datasets is not an easy task. Given that human annotation is too cumbersome and expensive, while machine-generated annotations are unreliable due to the hallucination issue, we develop an automatic annotation tool that combines machine and human experts under the active learning paradigm. Active learning is an interactive strategy between the model and human experts; in this way, the workload of human labeling can be reduced and the quality of the dataset can be guaranteed. With the help of the automatic annotation tool, we contribute three datasets, namely VideoCoT, TopicQA, and TopicCoT. Furthermore, we propose a simple but effective benchmark based on the collected datasets, which exploits CoT to maximize the complex reasoning capabilities of MLLMs. Extensive experiments demonstrate the effectiveness of our solution. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 470,938 |
2209.01813 | Automatic Estimation of Self-Reported Pain by Trajectory Analysis in the Manifold of Fixed Rank Positive Semi-Definite Matrices | We propose an automatic method to estimate self-reported pain based on facial landmarks extracted from videos. For each video sequence, we decompose the face into four different regions and the pain intensity is measured by modeling the dynamics of facial movement using the landmarks of these regions. A formulation based on Gram matrices is used for representing the trajectory of landmarks on the Riemannian manifold of symmetric positive semi-definite matrices of fixed rank. A curve fitting algorithm is used to smooth the trajectories and temporal alignment is performed to compute the similarity between the trajectories on the manifold. A Support Vector Regression classifier is then trained to encode extracted trajectories into pain intensity levels consistent with self-reported pain intensity measurement. Finally, a late fusion of the estimation for each region is performed to obtain the final predicted pain level. The proposed approach is evaluated on two publicly available datasets, the UNBC-McMaster Shoulder Pain Archive and the Biovid Heat Pain dataset. We compared our method to the state-of-the-art on both datasets using different testing protocols, showing the competitiveness of the proposed approach. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 316,015 |
2312.01840 | An AI-based solution for the cold start and data sparsity problems in the recommendation systems | In recent years, the amount of data available on the Internet and the number of users who utilize the Internet have increased at an unparalleled pace. The exponential development in the quantity of digital information accessible and the number of Internet users has created the possibility for information overload, impeding fast access to items of interest on the Internet. Information retrieval systems such as Google, DevilFinder, and Altavista have partly overcome this challenge, but prioritization and customization of information (where a system maps accessible material to a user's interests and preferences) were lacking. This has resulted in a higher-than-ever need for recommender systems. Recommender systems are information filtering systems that address the issue of information overload by filtering important information fragments from a huge volume of dynamically produced data based on the user's interests, favorite things, preferences and ratings on the desired item. Recommender systems can figure out if a person would like an item or not based on their profile. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 412,610 |
1906.03543 | apricot: Submodular selection for data summarization in Python | We present apricot, an open source Python package for selecting representative subsets from large data sets using submodular optimization. The package implements an efficient greedy selection algorithm that offers strong theoretical guarantees on the quality of the selected set. Two submodular set functions are implemented in apricot: facility location, which is broadly applicable but requires memory quadratic in the number of examples in the data set, and a feature-based function that is less broadly applicable but can scale to millions of examples. Apricot is extremely efficient, using both algorithmic speedups such as the lazy greedy algorithm and code optimizers such as numba. We demonstrate the use of subset selection by training machine learning models to comparable accuracy using either the full data set or a representative subset thereof. This paper presents an explanation of submodular selection, an overview of the features in apricot, and an application to several data sets. The code and tutorial Jupyter notebooks are available at https://github.com/jmschrei/apricot | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 134,411 |
1402.6133 | Bayesian Sample Size Determination of Vibration Signals in Machine Learning Approach to Fault Diagnosis of Roller Bearings | Sample size determination for a data set is an important statistical process for analyzing the data to an optimum level of accuracy and using minimum computational work. The applications of this process are relevant in every domain which deals with large data sets and high computational work. This study uses Bayesian analysis to determine the minimum sample size of vibration signals to be considered for fault diagnosis of a bearing, using pre-defined parameters such as the inverse standard probability and the acceptable margin of error. Thus an analytical formula for sample size determination is introduced. The fault diagnosis of the bearing is done using a machine learning approach with an entropy-based J48 algorithm. This method will help researchers involved in fault diagnosis to determine the minimum sample size of data required for analysis with good statistical stability and precision. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 31,153 |
2411.10163 | Compound-QA: A Benchmark for Evaluating LLMs on Compound Questions | Large language models (LLMs) demonstrate remarkable performance across various tasks, prompting researchers to develop diverse evaluation benchmarks. However, existing benchmarks typically measure the ability of LLMs to respond to individual questions, neglecting the complex interactions in real-world applications. In this paper, we introduce Compound Question Synthesis (CQ-Syn) to create the Compound-QA benchmark, focusing on compound questions with multiple sub-questions. This benchmark is derived from existing QA datasets, annotated with proprietary LLMs and verified by humans for accuracy. It encompasses five categories: Factual-Statement, Cause-and-Effect, Hypothetical-Analysis, Comparison-and-Selection, and Evaluation-and-Suggestion. It evaluates LLM capability along three dimensions: understanding, reasoning, and knowledge. Our assessment of eight open-source LLMs using Compound-QA reveals distinct patterns in their responses to compound questions, which are significantly poorer than those to non-compound questions. Additionally, we investigate various methods to enhance LLM performance on compound questions. The results indicate that these approaches significantly improve the models' comprehension and reasoning abilities on compound questions. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 508,524 |
2411.04319 | Towards Optimizing SQL Generation via LLM Routing | Text-to-SQL enables users to interact with databases through natural language, simplifying access to structured data. Although highly capable large language models (LLMs) achieve strong accuracy for complex queries, they incur unnecessary latency and dollar cost for simpler ones. In this paper, we introduce the first LLM routing approach for Text-to-SQL, which dynamically selects the most cost-effective LLM capable of generating accurate SQL for each query. We present two routing strategies (score- and classification-based) that achieve accuracy comparable to the most capable LLM while reducing costs. We design the routers for ease of training and efficient inference. In our experiments, we highlight a practical and explainable accuracy-cost trade-off on the BIRD dataset. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | 506,220 |
2407.02271 | Improving Explainability of Softmax Classifiers Using a Prototype-Based Joint Embedding Method | We propose a prototype-based approach for improving explainability of softmax classifiers that provides an understandable prediction confidence, generated through stochastic sampling of prototypes, and demonstrates potential for out-of-distribution (OOD) detection. By modifying the model architecture and training to make predictions using similarities to any set of class examples from the training dataset, we acquire the ability to sample for prototypical examples that contributed to the prediction, which provide an instance-based explanation for the model's decision. Furthermore, by learning relationships between images from the training dataset through relative distances within the model's latent space, we obtain a metric for uncertainty that is better able to detect out-of-distribution data than softmax confidence. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 469,655 |
1909.08243 | Quantified Constraint Handling Rules | We shift the QCSP (Quantified Constraint Satisfaction Problems) framework to the QCHR (Quantified Constraint Handling Rules) framework by enabling dynamic binders and access to user-defined constraints. QCSP offers a natural framework to express PSPACE problems as finite two-player games. But to define a QCSP model, the binder must be known beforehand and cannot be built dynamically even if the worst case won't occur. To overcome this issue, we define the new QCHR formalism that allows the binder to be built dynamically during solving. Our QCHR models exhibit state-of-the-art performance on static binders and outperform previous QCSP approaches when the binder is dynamic. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 145,921 |
2309.16105 | Differentially Private Secure Multiplication: Hiding Information in the Rubble of Noise | We consider the problem of private distributed multi-party multiplication. It is well-established that Shamir secret-sharing coding strategies can enable perfect information-theoretic privacy in distributed computation via the celebrated algorithm of Ben Or, Goldwasser and Wigderson (the "BGW algorithm"). However, perfect privacy and accuracy require an honest majority, that is, $N \geq 2t+1$ compute nodes are required to ensure privacy against any $t$ colluding adversarial nodes. By allowing for some controlled amount of information leakage and approximate multiplication instead of exact multiplication, we study coding schemes for the setting where the number of honest nodes can be a minority, that is $N < 2t+1$. We develop a tight characterization of the privacy-accuracy trade-off for cases where $N < 2t+1$ by measuring information leakage using differential privacy instead of perfect privacy, and using the mean squared error metric for accuracy. A novel technical aspect is an intricately layered noise distribution that merges ideas from differential privacy and Shamir secret-sharing at different layers. | false | false | false | false | false | false | true | false | false | true | false | false | true | false | false | false | false | true | 395,215 |
2402.10200 | Chain-of-Thought Reasoning Without Prompting | In enhancing the reasoning capabilities of large language models (LLMs), prior research primarily focuses on specific prompting techniques such as few-shot or zero-shot chain-of-thought (CoT) prompting. These methods, while effective, often involve manually intensive prompt engineering. Our study takes a novel approach by asking: Can LLMs reason effectively without prompting? Our findings reveal that, intriguingly, CoT reasoning paths can be elicited from pre-trained LLMs by simply altering the \textit{decoding} process. Rather than conventional greedy decoding, we investigate the top-$k$ alternative tokens, uncovering that CoT paths are frequently inherent in these sequences. This approach not only bypasses the confounders of prompting but also allows us to assess the LLMs' \textit{intrinsic} reasoning abilities. Moreover, we observe that the presence of a CoT in the decoding path correlates with a higher confidence in the model's decoded answer. This confidence metric effectively differentiates between CoT and non-CoT paths. Extensive empirical studies on various reasoning benchmarks show that the proposed CoT-decoding effectively elicits reasoning capabilities from language models, which were previously obscured by standard greedy decoding. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 429,865 |
2311.08258 | Unprecedented reach and rich online journeys drive hate and extremism globally | Hate and extremism cannot be controlled globally without understanding how they operate at scale. Both have escalated dramatically during the Israel-Hamas and Ukraine-Russia wars. Here we show how the online hate-extremism system is now operating at unprecedented scale across 26 social media platforms of all sizes, audience demographics, and geographic locations; and we analyze individuals' journeys through it. This new picture contradicts notions of rabbit-hole activity at the fringe of the Internet. Instead, it shows that hate-extremism support now enjoys a direct link to more than a billion of the general global population, and that newcomers now enjoy a rich variety of online journey experiences during which they get to mingle with experienced violent actors, discuss topics from diverse news sources, and learn to collectively adapt in order to bypass platform shutdowns. Our results mean that law enforcement must expect future mass shooters to have increasingly hard-to-understand online journeys; that new E.U. laws will fall short because the combined impact of many smaller, lesser-known platforms outstrips larger ones like Twitter; and that the current global hate-extremism infrastructure will become increasingly robust in 2024 and beyond. Fortunately, it also reveals a new opportunity for system-wide control akin to adaptive vs. extinction treatments for cancer. | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 407,650 |
1509.05096 | Out of vocabulary words decrease, running texts prevail and hashtags coalesce: Twitter as an evolving sociolinguistic system | Twitter is one of the most popular social media platforms. Due to the ease of availability of data, Twitter is used significantly for research purposes. Twitter is known to have evolved in many aspects from what it was at its birth; nevertheless, how its linguistic style evolved is still relatively unknown. In this paper, we study the evolution of various sociolinguistic aspects of Twitter over large time scales. To the best of our knowledge, this is the first comprehensive study on the evolution of such aspects of this OSN. We performed quantitative analysis both on the word level as well as on the hashtags, since the hashtag is perhaps one of the most important linguistic units of this social media. We studied the (in)formality aspects of the linguistic styles in Twitter and find that it is neither fully formal nor completely informal; while on one hand, we observe that Out-Of-Vocabulary words are decreasing over time (pointing to a formal style), on the other hand it is quite evident that whitespace usage is getting reduced with a huge prevalence of running texts (pointing to an informal style). We also analyze and propose quantitative reasons for repetition and coalescing of hashtags in Twitter. We believe that such phenomena may be strongly tied to different evolutionary aspects of human languages. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 47,003 |
2306.09666 | A Smooth Binary Mechanism for Efficient Private Continual Observation | In privacy under continual observation we study how to release differentially private estimates based on a dataset that evolves over time. The problem of releasing private prefix sums of $x_1,x_2,x_3,\dots \in\{0,1\}$ (where the value of each $x_i$ is to be private) is particularly well-studied, and a generalized form is used in state-of-the-art methods for private stochastic gradient descent (SGD). The seminal binary mechanism privately releases the first $t$ prefix sums with noise of variance polylogarithmic in $t$. Recently, Henzinger et al. and Denisov et al. showed that it is possible to improve on the binary mechanism in two ways: The variance of the noise can be reduced by a (large) constant factor, and also made more even across time steps. However, their algorithms for generating the noise distribution are not as efficient as one would like in terms of computation time and (in particular) space. We address the efficiency problem by presenting a simple alternative to the binary mechanism in which 1) generating the noise takes constant average time per value, 2) the variance is reduced by a factor about 4 compared to the binary mechanism, and 3) the noise distribution at each step is identical. Empirically, a simple Python implementation of our approach outperforms the running time of the approach of Henzinger et al., as well as an attempt to improve their algorithm using high-performance algorithms for multiplication with Toeplitz matrices. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 373,921 |
1805.01811 | Failure Prediction for Autonomous Driving | The primary focus of autonomous driving research is to improve driving accuracy. While great progress has been made, state-of-the-art algorithms still fail at times. Such failures may have catastrophic consequences. It is therefore important that automated cars foresee problems ahead as early as possible. This is also of paramount importance if the driver will be asked to take over. We conjecture that failures do not occur randomly. For instance, driving models are more likely to fail at places with heavy traffic, at complex intersections, and/or under adverse weather/illumination conditions. This work presents a method to learn to predict the occurrence of these failures, i.e. to assess how difficult a scene is for a given driving model and to possibly give the human driver an early heads-up. A camera-based driving model is developed and trained over real driving datasets. The discrepancies between the model's predictions and the human 'ground-truth' maneuvers were then recorded, to yield the 'failure' scores. Experimental results show that the failure score can indeed be learned and predicted. Thus, our prediction method is able to improve the overall safety of an automated driving model by alerting the human driver in a timely manner, leading to better human-vehicle collaborative driving. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 96,707 |
2006.05622 | P-ADMMiRNN: Training RNN with Stable Convergence via An Efficient and Paralleled ADMM Approach | It is hard to train a Recurrent Neural Network (RNN) with stable convergence and avoid gradient vanishing and exploding problems, as the weights in the recurrent unit are repeated from iteration to iteration. Moreover, RNN is sensitive to the initialization of weights and bias, which brings difficulties in training. The Alternating Direction Method of Multipliers (ADMM) has become a promising algorithm for training neural networks beyond traditional stochastic gradient algorithms, owing to its gradient-free nature and immunity to unsatisfactory conditions. However, ADMM cannot be applied to train RNN directly since the state in the recurrent unit is repetitively updated over timesteps. Therefore, this work builds a new framework named ADMMiRNN upon the unfolded form of RNN to address the above challenges simultaneously. We also provide novel update rules and theoretical convergence analysis. We explicitly specify essential update rules in the iterations of ADMMiRNN with constructed approximation techniques and solutions to each sub-problem instead of vanilla ADMM. Numerical experiments are conducted on MNIST, IMDb, and text classification tasks. ADMMiRNN achieves convergent results and outperforms the compared baselines. Furthermore, ADMMiRNN trains RNN more stably without gradient vanishing or exploding than stochastic gradient algorithms. We also provide a distributed paralleled algorithm regarding ADMMiRNN, named P-ADMMiRNN, including Synchronous Parallel ADMMiRNN (SP-ADMMiRNN) and Asynchronous Parallel ADMMiRNN (AP-ADMMiRNN), which is the first to train RNN with ADMM in an asynchronous parallel manner. The source code is publicly available. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 181,142 |
2309.15830 | OrthoPlanes: A Novel Representation for Better 3D-Awareness of GANs | We present a new method for generating realistic and view-consistent images with fine geometry from 2D image collections. Our method proposes a hybrid explicit-implicit representation called \textbf{OrthoPlanes}, which encodes fine-grained 3D information in feature maps that can be efficiently generated by modifying 2D StyleGANs. Compared to previous representations, our method has better scalability and expressiveness with clear and explicit information. As a result, our method can handle more challenging view-angles and synthesize articulated objects with high spatial degrees of freedom. Experiments demonstrate that our method achieves state-of-the-art results on FFHQ and SHHQ datasets, both quantitatively and qualitatively. Project page: \url{https://orthoplanes.github.io/}. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 395,135 |
2204.02662 | Accelerating Backward Aggregation in GCN Training with Execution Path Preparing on GPUs | The emerging Graph Convolutional Network (GCN) is now widely used in many domains, and it is challenging to improve application efficiency by accelerating GCN training. Owing to the sparse nature and exploding scale of input real-world graphs, state-of-the-art GCN training systems (e.g., GNNAdvisor) employ graph processing techniques to accelerate the message exchange (i.e., aggregation) among the graph vertices. Nevertheless, these systems treat both the aggregation stages of forward and backward propagation phases as all-active graph processing procedures that indiscriminately conduct computation on all vertices of an input graph. In this paper, we first point out that in a GCN training problem with a given training set, the aggregation stages of its backward propagation phase (called backward aggregations in this paper) can be converted to partially-active graph processing procedures, which conduct computation on only part of the vertices of the input graph. By leveraging this finding, we propose an execution path preparing method that collects and coalesces the data used during backward propagations of GCN training conducted on GPUs. The experimental results show that compared with GNNAdvisor, our approach improves the performance of the backward aggregation of GCN training on typical real-world graphs by 1.48x~5.65x. Moreover, the execution path preparing can be conducted either before the training (during preprocessing) or on-the-fly with the training. When used during preprocessing, our approach improves the overall GCN training by 1.05x~1.37x. And when used on-the-fly, our approach improves the overall GCN training by 1.03x~1.35x. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 290,042 |
2411.12989 | Data Watermarking for Sequential Recommender Systems | In the era of large foundation models, data has become a crucial component for building high-performance AI systems. As the demand for high-quality and large-scale data continues to rise, data copyright protection is attracting increasing attention. In this work, we explore the problem of data watermarking for sequential recommender systems, where a watermark is embedded into the target dataset and can be detected in models trained on that dataset. We address two specific challenges: dataset watermarking, which protects the ownership of the entire dataset, and user watermarking, which safeguards the data of individual users. We systematically define these problems and present a method named DWRS to address them. Our approach involves randomly selecting unpopular items to create a watermark sequence, which is then inserted into normal users' interaction sequences. Extensive experiments on five representative sequential recommendation models and three benchmark datasets demonstrate the effectiveness of DWRS in protecting data copyright while preserving model utility. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 509,627 |
1911.06509 | Improved algorithm for neuronal ensemble inference by Monte Carlo method | Neuronal ensemble inference is one of the significant problems in the study of biological neural networks. Various methods have been proposed for ensemble inference from experimentally recorded activity data. Here we focus on a Bayesian inference approach for ensembles with a generative model, which was proposed in recent work. However, this method requires a large computational cost, and the result sometimes gets stuck in a poor local maximum of the Bayesian inference. In this work, we give an improved Bayesian inference algorithm that addresses these problems. We modify the ensemble generation rule in the Markov chain Monte Carlo method, and introduce the idea of simulated annealing for hyperparameter control. We also compare the performance of ensemble inference between our algorithm and the original one. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 153,560 |
1912.02153 | Walking on the Edge: Fast, Low-Distortion Adversarial Examples | Adversarial examples of deep neural networks are receiving ever-increasing attention because they help in understanding and reducing the sensitivity of these networks to their input. This is natural given the increasing applications of deep neural networks in our everyday lives. Since white-box attacks are almost always successful, it is typically only the distortion of the perturbations that matters in their evaluation. In this work, we argue that speed is important as well, especially when considering that fast attacks are required by adversarial training. Given more time, iterative methods can always find better solutions. We investigate this speed-distortion trade-off in some depth and introduce a new attack called boundary projection (BP) that improves upon existing methods by a large margin. Our key idea is that the classification boundary is a manifold in the image space: we therefore quickly reach the boundary and then optimize distortion on this manifold. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 156,270 |
2006.09701 | Adversarial Defense by Latent Style Transformations | Machine learning models have demonstrated vulnerability to adversarial attacks, more specifically misclassification of adversarial examples. In this paper, we investigate an attack-agnostic defense against adversarial attacks on high-resolution images by detecting suspicious inputs. The intuition behind our approach is that the essential characteristics of a normal image are generally consistent with non-essential style transformations, e.g., slightly changing the facial expression of human portraits. In contrast, adversarial examples are generally sensitive to such transformations. In our approach to detect adversarial instances, we propose an in\underline{V}ertible \underline{A}utoencoder based on the \underline{S}tyleGAN2 generator via \underline{A}dversarial training (VASA) to inverse images to disentangled latent codes that reveal hierarchical styles. We then build a set of edited copies with non-essential style transformations by performing latent shifting and reconstruction, based on the correspondences between latent codes and style transformations. The classification-based consistency of these edited copies is used to distinguish adversarial instances. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 182,631 |
1702.04342 | BranchHull: Convex bilinear inversion from the entrywise product of signals with known signs | We consider the bilinear inverse problem of recovering two vectors, $x$ and $w$, in $\mathbb{R}^L$ from their entrywise product. For the case where the vectors have known signs and belong to known subspaces, we introduce the convex program BranchHull, which is posed in the natural parameter space that does not require an approximate solution or initialization in order to be stated or solved. Under the structural assumptions that $x$ and $w$ are members of known $K$ and $N$ dimensional random subspaces, we present a recovery guarantee for the noiseless case and a noisy case. In the noiseless case, we prove that BranchHull recovers $x$ and $w$ up to the inherent scaling ambiguity with high probability when $L \gg 2(K+N)$. The analysis provides a precise upper bound on the coefficient for the sample complexity. In the noisy case, we show that with high probability BranchHull is robust to small dense noise when $L = \Omega(K+N)$. BranchHull is motivated by the sweep distortion removal task in dielectric imaging, where one of the signals is a nonnegative reflectivity, and the other signal lives in a known wavelet subspace. Additional potential applications are blind deconvolution and self-calibration. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 68,247 |
2310.02029 | Between accurate prediction and poor decision making: the AI/ML gap | Intelligent agents rely on AI/ML functionalities to predict the consequence of possible actions and optimise the policy. However, the effort of the research community in addressing prediction accuracy has been so intense (and successful) that it created the illusion that the more accurate the learner prediction (or classification), the better the final decision would have been. Now, such an assumption is valid only if the (human or artificial) decision maker has complete knowledge of the utility of the possible actions. This paper argues that the AI/ML community has so far taken a too unbalanced approach by devoting excessive attention to the estimation of the state (or target) probability to the detriment of accurate and reliable estimations of the utility. In particular, little evidence exists about the impact of a wrong utility assessment on the resulting expected utility of the decision strategy. This situation is creating a substantial gap between the expectations and the effective impact of AI solutions, as witnessed by recent criticisms and emphasised by the regulatory legislative efforts. This paper aims to study this gap by quantifying the sensitivity of the expected utility to the utility uncertainty and comparing it to the one due to probability estimation. Theoretical and simulated results show that an inaccurate utility assessment may be as harmful as (and sometimes more harmful than) a poor probability estimation. The final recommendation to the community is then to undertake a focus shift from a pure accuracy-driven (or obsessed) approach to a more utility-aware methodology. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 396,681 |
2304.09919 | The eBible Corpus: Data and Model Benchmarks for Bible Translation for Low-Resource Languages | Efficiently and accurately translating a corpus into a low-resource language remains a challenge, regardless of the strategies employed, whether manual, automated, or a combination of the two. Many Christian organizations are dedicated to the task of translating the Holy Bible into languages that lack a modern translation. Bible translation (BT) work is currently underway for over 3000 extremely low resource languages. We introduce the eBible corpus: a dataset containing 1009 translations of portions of the Bible with data in 833 different languages across 75 language families. In addition to a BT benchmarking dataset, we introduce model performance benchmarks built on the No Language Left Behind (NLLB) neural machine translation (NMT) models. Finally, we describe several problems specific to the domain of BT and consider how the established data and model benchmarks might be used for future translation efforts. For a BT task trained with NLLB, Austronesian and Trans-New Guinea language families achieve 35.1 and 31.6 BLEU scores respectively, which spurs future innovations for NMT for low-resource languages in Papua New Guinea. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 359,221 |
1807.09341 | Learning Plannable Representations with Causal InfoGAN | In recent years, deep generative models have been shown to 'imagine' convincing high-dimensional observations such as images, audio, and even video, learning directly from raw data. In this work, we ask how to imagine goal-directed visual plans -- a plausible sequence of observations that transition a dynamical system from its current configuration to a desired goal state, which can later be used as a reference trajectory for control. We focus on systems with high-dimensional observations, such as images, and propose an approach that naturally combines representation learning and planning. Our framework learns a generative model of sequential observations, where the generative process is induced by a transition in a low-dimensional planning model, and an additional noise. By maximizing the mutual information between the generated observations and the transition in the planning model, we obtain a low-dimensional representation that best explains the causal nature of the data. We structure the planning model to be compatible with efficient planning algorithms, and we propose several such models based on either discrete or continuous states. Finally, to generate a visual plan, we project the current and goal observations onto their respective states in the planning model, plan a trajectory, and then use the generative model to transform the trajectory to a sequence of observations. We demonstrate our method on imagining plausible visual plans of rope manipulation. | false | false | false | false | true | false | true | true | false | false | false | true | false | false | false | true | false | false | 103,700 |
2011.08723 | Robust Stability of Suboptimal Moving Horizon Estimation using an Observer-Based Candidate Solution | In this paper, we propose a suboptimal moving horizon estimator for nonlinear systems. For the stability analysis we transfer the "feasibility-implies-stability/robustness" paradigm from model predictive control to the context of moving horizon estimation in the following sense: Using a suitably defined, feasible candidate solution based on the trajectory of an auxiliary observer, robust stability of the proposed suboptimal estimator is inherited independently of the horizon length and even if no optimization is performed. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 206,971 |
2408.05636 | Speculative Diffusion Decoding: Accelerating Language Generation through Diffusion | Speculative decoding has emerged as a widely adopted method to accelerate large language model inference without sacrificing the quality of the model outputs. While this technique has facilitated notable speed improvements by enabling parallel sequence verification, its efficiency remains inherently limited by the reliance on incremental token generation in existing draft models. To overcome this limitation, this paper proposes an adaptation of speculative decoding which uses discrete diffusion models to generate draft sequences. This allows parallelization of both the drafting and verification steps, providing significant speedups to the inference process. Our proposed approach, $\textit{Speculative Diffusion Decoding (SpecDiff)}$, is validated on standard language generation benchmarks and empirically demonstrated to provide up to 7.2x speedups over standard generation processes and up to 1.75x speedups over existing speculative decoding approaches. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 479,871 |
2401.07042 | GEML: A Grammar-based Evolutionary Machine Learning Approach for Design-Pattern Detection | Design patterns (DPs) are recognised as a good practice in software development. However, the lack of appropriate documentation often hampers traceability, and their benefits are blurred among thousands of lines of code. Automatic methods for DP detection have become relevant but are usually based on the rigid analysis of either software metrics or specific properties of the source code. We propose GEML, a novel detection approach based on evolutionary machine learning using software properties of diverse nature. Firstly, GEML makes use of an evolutionary algorithm to extract those characteristics that better describe the DP, formulated in terms of human-readable rules, whose syntax is conformant with a context-free grammar. Secondly, a rule-based classifier is built to predict whether new code contains a hidden DP implementation. GEML has been validated over five DPs taken from a public repository recurrently adopted by machine learning studies. Then, we increase this number up to 15 diverse DPs, showing its effectiveness and robustness in terms of detection capability. An initial parameter study served to tune a parameter setup whose performance guarantees the general applicability of this approach without the need to adjust complex parameters to a specific pattern. Finally, a demonstration tool is also provided. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 421,396 |
2409.18473 | Efficient Top-k s-Biplexes Search over Large Bipartite Graphs | In a bipartite graph, a subgraph is an $s$-biplex if each vertex of the subgraph is adjacent to all but at most $s$ vertices on the opposite set. The enumeration of $s$-biplexes from a given graph is a fundamental problem in bipartite graph analysis. However, in real-world data engineering, finding all $s$-biplexes is neither necessary nor computationally affordable. A more realistic problem is to identify some of the largest $s$-biplexes from the large input graph. We formulate the problem as the {\em top-$k$ $s$-biplex search (TBS) problem}, which aims to find the top-$k$ maximal $s$-biplexes with the most vertices, where $k$ is an input parameter. We prove that the TBS problem is NP-hard for any fixed $k\ge 1$. Then, we propose a branching algorithm, named MVBP, that improves on the trivial $2^n$ enumeration algorithm. Furthermore, from a practical perspective, we investigate three techniques to improve the performance of MVBP: 2-hop decomposition, single-side bounds, and progressive search. Complexity analysis shows that the improved algorithm, named FastMVBP, has a running time of $O^*(\gamma_s^{d_2})$, where $\gamma_s<2$, and $d_2$ is a parameter much smaller than the number of vertices in sparse real-world graphs, e.g., $d_2$ is only $67$ in the AmazonRatings dataset, which has more than $3$ million vertices. Finally, we conducted extensive experiments on eight real-world and synthetic datasets to demonstrate the empirical efficiency of the proposed algorithms. In particular, FastMVBP outperforms the benchmark algorithms by up to three orders of magnitude in several instances. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | 492,274 |
2102.08585 | Towards Faithfulness in Open Domain Table-to-text Generation from an Entity-centric View | In open domain table-to-text generation, we notice that unfaithful generation usually contains hallucinated content which cannot be aligned to any input table record. We thus try to evaluate generation faithfulness with two entity-centric metrics: table record coverage and the ratio of hallucinated entities in text, both of which are shown to have strong agreement with human judgements. Then, based on these metrics, we quantitatively analyze the correlation between training data quality and generation fidelity, which indicates the potential usage of entity information in faithful generation. Motivated by these findings, we propose two methods for faithful generation: 1) augmented training by incorporating the auxiliary entity information, including both an augmented plan-based model and an unsupervised model, and 2) training instance selection based on faithfulness ranking. We show these approaches improve generation fidelity in both the full-dataset and few-shot learning settings by both automatic and human evaluations. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 220,511 |
1603.05486 | A flexible state space model for learning nonlinear dynamical systems | We consider a nonlinear state-space model with the state transition and observation functions expressed as basis function expansions. The coefficients in the basis function expansions are learned from data. Using a connection to Gaussian processes we also develop priors on the coefficients, for tuning the model flexibility and to prevent overfitting to data, akin to a Gaussian process state-space model. The priors can alternatively be seen as a regularization, and help the model generalize from the data without sacrificing the richness offered by the basis function expansion. To learn the coefficients and other unknown parameters efficiently, we tailor an algorithm using state-of-the-art sequential Monte Carlo methods, which comes with theoretical guarantees on the learning. Our approach shows promising results when evaluated on a classical benchmark as well as real data. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 53,365 |
2409.18989 | SC-Phi2: A Fine-tuned Small Language Model for StarCraft II Macromanagement Tasks | This paper introduces SC-Phi2, a fine-tuned StarCraft II small language model for macromanagement tasks. Small language models, like Phi2, Gemma, and DistilBERT, are streamlined versions of large language models (LLMs) with fewer parameters that require less power and memory to run. To teach Microsoft's Phi2 model about StarCraft, we create a new SC2 text dataset with information about StarCraft races, roles, and actions and use it to fine-tune Phi-2 with self-supervised learning. We pair this language model with a Vision Transformer (ViT) from the pre-trained BLIP-2 (Bootstrapping Language Image Pre-training) model, fine-tuning it on the MSC replay dataset. This enables us to construct dynamic prompts that include visual game state information. Unlike the large models used in StarCraft LLMs such as GPT-3.5, Phi2 is trained primarily on textbook data and contains little inherent knowledge of StarCraft II beyond what is provided by our training process. By using LoRA (Low-rank Adaptation) and quantization, our model can be trained on a single GPU. We demonstrate that our model performs well at micromanagement tasks such as build order and global state prediction with a small number of parameters. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 492,492 |
1811.12642 | AI Neurotechnology for Aging Societies -- Task-load and Dementia EEG Digital Biomarker Development Using Information Geometry Machine Learning Methods | Dementia and especially Alzheimer's disease (AD) are the most common causes of cognitive decline in elderly people. The spread of the above-mentioned mental health problems in aging societies is causing a significant medical and economic burden in many countries around the world. According to a recent World Health Organization (WHO) report, it is estimated that about 47 million people worldwide currently live with a dementia spectrum of neurocognitive disorders. This number is expected to triple by 2050, which calls for the possible application of AI-based technologies to support early screening for preventive interventions and subsequent mental wellbeing monitoring and maintenance with so-called digital-pharma or "beyond the pill" therapeutic approaches. This paper discusses our attempt at, and preliminary results of, using brainwave (EEG) techniques to develop digital biomarkers for dementia progress detection and monitoring. We present an information geometry-based classification approach for automatic EEG-derived event-related response (ERP) discrimination of low versus high task-load auditory or tactile stimuli recognition, whose amplitude and latency variabilities are similar to those in dementia. The discussed approach is a step toward developing AI, and especially machine learning (ML), approaches for subsequent application to mild cognitive impairment (MCI) and AD diagnostics. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 115,064 |
2011.12747 | Symmetry-Aware Actor-Critic for 3D Molecular Design | Automating molecular design using deep reinforcement learning (RL) has the potential to greatly accelerate the search for novel materials. Despite recent progress on leveraging graph representations to design molecules, such methods are fundamentally limited by the lack of three-dimensional (3D) information. In light of this, we propose a novel actor-critic architecture for 3D molecular design that can generate molecular structures unattainable with previous approaches. This is achieved by exploiting the symmetries of the design process through a rotationally covariant state-action representation based on a spherical harmonics series expansion. We demonstrate the benefits of our approach on several 3D molecular design tasks, where we find that building in such symmetries significantly improves generalization and the quality of generated molecules. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 208,260 |
1406.1055 | Lattice Codes for the Binary Deletion Channel | The construction of deletion codes for the Levenshtein metric is reduced to the construction of codes over the integers for the Manhattan metric by run-length coding. The latter codes are constructed by expurgation of translates of lattices. These lattices, in turn, are obtained from Construction A applied to binary codes and $\mathbb{Z}_4$-codes. A lower bound on the size of our codes for the Manhattan distance is obtained through generalized theta series of the corresponding lattices. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 33,593 |
2107.07582 | Prediction of Blood Lactate Values in Critically Ill Patients: A Retrospective Multi-center Cohort Study | Purpose. Elevations in initially obtained serum lactate levels are strong predictors of mortality in critically ill patients. Identifying patients whose serum lactate levels are more likely to increase can alert physicians to intensify care and guide them in how frequently to repeat the blood test. We investigate whether machine learning models can predict subsequent serum lactate changes. Methods. We investigated serum lactate change prediction using the MIMIC-III and eICU-CRD datasets, performing internal validation as well as external validation of the eICU cohort on the MIMIC-III cohort. Three subgroups were defined based on the initial lactate levels: i) normal group (<2 mmol/L), ii) mild group (2-4 mmol/L), and iii) severe group (>4 mmol/L). Outcomes were defined based on the increase or decrease of serum lactate levels between the groups. We also performed a sensitivity analysis by defining the outcome as a lactate change of >10% and furthermore investigated the influence of the time interval between subsequent lactate measurements on predictive performance. Results. The LSTM models were able to predict deterioration of serum lactate values of MIMIC-III patients with an AUC of 0.77 (95% CI 0.762-0.771) for the normal group, 0.77 (95% CI 0.768-0.772) for the mild group, and 0.85 (95% CI 0.840-0.851) for the severe group, with slightly lower performance in the external validation. Conclusion. The LSTM demonstrated good discrimination of patients who had deterioration in serum lactate levels. Clinical studies are needed to evaluate whether utilization of a clinical decision support tool based on these results could positively impact decision-making and patient outcomes. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 246,464 |
1109.3714 | High-dimensional regression with noisy and missing data: Provable guarantees with nonconvexity | Although the standard formulations of prediction problems involve fully-observed and noiseless data drawn in an i.i.d. manner, many applications involve noisy and/or missing data, possibly involving dependence, as well. We study these issues in the context of high-dimensional sparse linear regression, and propose novel estimators for the cases of noisy, missing and/or dependent data. Many standard approaches to noisy or missing data, such as those using the EM algorithm, lead to optimization problems that are inherently nonconvex, and it is difficult to establish theoretical guarantees on practical algorithms. While our approach also involves optimizing nonconvex programs, we are able to both analyze the statistical error associated with any global optimum, and more surprisingly, to prove that a simple algorithm based on projected gradient descent will converge in polynomial time to a small neighborhood of the set of all global minimizers. On the statistical side, we provide nonasymptotic bounds that hold with high probability for the cases of noisy, missing and/or dependent data. On the computational side, we prove that under the same types of conditions required for statistical consistency, the projected gradient descent algorithm is guaranteed to converge at a geometric rate to a near-global minimizer. We illustrate these theoretical predictions with simulations, showing close agreement with the predicted scalings. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 12,204 |
2411.15296 | MME-Survey: A Comprehensive Survey on Evaluation of Multimodal LLMs | As a prominent direction of Artificial General Intelligence (AGI), Multimodal Large Language Models (MLLMs) have garnered increased attention from both industry and academia. Building upon pre-trained LLMs, this family of models further develops multimodal perception and reasoning capabilities that are impressive, such as writing code given a flow chart or creating stories based on an image. In the development process, evaluation is critical since it provides intuitive feedback and guidance on improving models. Distinct from the traditional train-eval-test paradigm that only favors a single task like image classification, the versatility of MLLMs has spurred the rise of various new benchmarks and evaluation methods. In this paper, we aim to present a comprehensive survey of MLLM evaluation, discussing four key aspects: 1) the summarised benchmark types divided by the evaluation capabilities, including foundation capabilities, model self-analysis, and extended applications; 2) the typical process of benchmark construction, consisting of data collection, annotation, and precautions; 3) the systematic evaluation manner composed of judge, metric, and toolkit; 4) the outlook for the next benchmark. This work aims to offer researchers an easy grasp of how to effectively evaluate MLLMs according to different needs and to inspire better evaluation methods, thereby driving the progress of MLLM research. | false | false | false | false | true | false | false | false | true | false | false | true | false | false | false | false | false | false | 510,559 |
1610.07671 | Sparser Sparse Roadmaps | We present methods for offline generation of sparse roadmap spanners that result in graphs 79% smaller than existing approaches while returning solutions of equivalent path quality. Our method uses a hybrid approach to sampling that combines traditional graph discretization with random sampling. We present techniques that optimize the graph for the L1-norm metric function commonly used in joint-based robotic planning, purposefully choosing a $t$-stretch factor based on the geometry of the space, and removing redundant edges that do not contribute to the graph quality. A high-quality pre-processed sparse roadmap is then available for re-use across many different planning scenarios using standard repair and re-plan methods. Pre-computing the roadmap offline results in more deterministic solutions, reduces the memory requirements by affording complex rejection criteria, and increases the speed of planning in high-dimensional spaces allowing more complex problems to be solved such as multi-modal task planning. Our method is validated through simulated benchmarks against the SPARS2 algorithm. The source code is freely available online as an open source extension to OMPL. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 62,821 |
2301.07074 | SegViz: A federated-learning based framework for multi-organ
segmentation on heterogeneous data sets with partial annotations | Segmentation is one of the most fundamental tasks in deep learning for medical imaging, owing to its multiple downstream clinical applications. However, generating manual annotations for medical images is time-consuming, requires high skill, and is an expensive effort, especially for 3D images. One potential solution is to aggregate knowledge from partially annotated datasets from multiple groups to collaboratively train global models using Federated Learning. To this end, we propose SegViz, a federated learning-based framework to train a segmentation model from distributed non-i.i.d. datasets with partial annotations. The performance of SegViz was compared against training individual models separately on each dataset as well as centrally aggregating all the datasets in one place and training a single model. The SegViz framework using FedBN as the aggregation strategy demonstrated excellent performance on the external BTCV set with dice scores of 0.93, 0.83, 0.55, and 0.75 for segmentation of liver, spleen, pancreas, and kidneys, respectively, significantly ($p<0.05$) better (except spleen) than the dice scores of 0.87, 0.83, 0.42, and 0.48 for the baseline models. In contrast, the central aggregation model performed significantly ($p<0.05$) worse on the test dataset with dice scores of 0.65, 0, 0.55, and 0.68. Our results demonstrate the potential of the SegViz framework to train multi-task models from distributed datasets with partial labels. All our implementations are open-source and available at https://anonymous.4open.science/r/SegViz-B746 | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 340,822
2309.09274 | Leveraging Social Discourse to Measure Check-worthiness of Claims for
Fact-checking | The expansion of online social media platforms has led to a surge in online content consumption. However, this has also paved the way for disseminating false claims and misinformation. As a result, there is an escalating demand for a substantial workforce to sift through and validate such unverified claims. Currently, these claims are manually verified by fact-checkers. However, the volume of online content far exceeds their capacity, making it difficult to validate every single claim in a timely manner. Thus, it is critical to determine which assertions are worth fact-checking and prioritize claims that require immediate attention. Multiple factors contribute to determining whether a claim necessitates fact-checking, such as its factual correctness, potential impact on the public, the probability of inciting hatred, and more. Despite several efforts to address claim check-worthiness, a systematic approach to identify these factors remains an open challenge. To this end, we introduce a new task of fine-grained claim check-worthiness, which underpins all of these factors and provides probable human grounds for identifying a claim as check-worthy. We present CheckIt, a manually annotated large Twitter dataset for fine-grained claim check-worthiness. We benchmark our dataset against a unified approach, CheckMate, that jointly determines whether a claim is check-worthy and the factors that led to that conclusion. We compare our suggested system with several baseline systems. Finally, we report a thorough analysis of results and human assessment, validating the efficacy of integrating check-worthiness factors in detecting claims worth fact-checking. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 392,544
2403.06269 | FastVideoEdit: Leveraging Consistency Models for Efficient Text-to-Video
Editing | Diffusion models have demonstrated remarkable capabilities in text-to-image and text-to-video generation, opening up possibilities for video editing based on textual input. However, the computational cost associated with sequential sampling in diffusion models poses challenges for efficient video editing. Existing approaches relying on image generation models for video editing suffer from time-consuming one-shot fine-tuning, additional condition extraction, or DDIM inversion, making real-time applications impractical. In this work, we propose FastVideoEdit, an efficient zero-shot video editing approach inspired by Consistency Models (CMs). By leveraging the self-consistency property of CMs, we eliminate the need for time-consuming inversion or additional condition extraction, reducing editing time. Our method enables direct mapping from source video to target video with strong preservation ability utilizing a special variance schedule. This results in improved speed advantages, as fewer sampling steps can be used while maintaining comparable generation quality. Experimental results validate the state-of-the-art performance and speed advantages of FastVideoEdit across evaluation metrics encompassing editing speed, temporal consistency, and text-video alignment. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 436,374 |
2202.05255 | Topogivity: A Machine-Learned Chemical Rule for Discovering Topological
Materials | Topological materials present unconventional electronic properties that make them attractive for both basic science and next-generation technological applications. The majority of currently known topological materials have been discovered using methods that involve symmetry-based analysis of the quantum wavefunction. Here we use machine learning to develop a simple-to-use heuristic chemical rule that diagnoses with a high accuracy whether a material is topological using only its chemical formula. This heuristic rule is based on a notion that we term topogivity, a machine-learned numerical value for each element that loosely captures its tendency to form topological materials. We next implement a high-throughput procedure for discovering topological materials based on the heuristic topogivity-rule prediction followed by ab initio validation. This way, we discover new topological materials that are not diagnosable using symmetry indicators, including several that may be promising for experimental observation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 279,819 |
1404.2074 | Renewable Powered Cellular Networks: Energy Field Modeling and Network
Coverage | Powering radio access networks using renewables, such as wind and solar power, promises dramatic reduction in the network operation cost and the network carbon footprints. However, the spatial variation of the energy field can lead to fluctuations in power supplied to the network and thereby affects its coverage. This warrants research on quantifying the aforementioned negative effect and countermeasure techniques, motivating the current work. First, a novel energy field model is presented, in which fixed maximum energy intensity $\gamma$ occurs at Poisson distributed locations, called energy centers. The intensities fall off from the centers following an exponential decay function of squared distance and the energy intensity at an arbitrary location is given by the decayed intensity from the nearest energy center. The product between the energy center density and the exponential rate of the decay function, denoted as $\psi$, is shown to determine the energy field distribution. Next, the paper considers a cellular downlink network powered by harvesting energy from the energy field and analyzes its network coverage. For the case of harvesters deployed at the same sites as base stations (BSs), as $\gamma$ increases, the mobile outage probability is shown to scale as $(c \gamma^{-\pi\psi}+p)$, where $p$ is the outage probability corresponding to a flat energy field and $c$ a constant. Subsequently, a simple scheme is proposed for counteracting the energy randomness by spatial averaging. Specifically, distributed harvesters are deployed in clusters and the generated energy from the same cluster is aggregated and then redistributed to BSs. As the cluster size increases, the power supplied to each BS is shown to converge to a constant proportional to the number of harvesters per BS. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 32,175 |
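A quick simulation of the energy field model described in this abstract is easy to write down: centers drawn from a homogeneous Poisson point process, with the intensity at a point given by the exponentially decayed intensity from the nearest center. The parameter values below are illustrative only, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, nu, density, side = 1.0, 0.5, 0.1, 20.0   # illustrative parameters

# Energy centers form a homogeneous Poisson point process on a square.
n_centers = rng.poisson(density * side**2)
centers = rng.uniform(0.0, side, size=(n_centers, 2))

def energy_intensity(x):
    """Decayed intensity from the nearest energy center:
    gamma * exp(-nu * squared distance)."""
    d2 = ((centers - x) ** 2).sum(axis=1).min()
    return gamma * np.exp(-nu * d2)

print(energy_intensity(np.array([10.0, 10.0])))
```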
2107.05191 | The Impact of Three-phase Impedances on the Stability of DER systems | In this work we explore impedance-based interactions that arise when inverter-connected distributed energy resources (DERs) inject real and reactive power to regulate voltage and power flows on three-phase unbalanced distribution grids. We consider two inverter control frameworks that compute power setpoints: a mix of volt/var and volt/watt control, and phasor-based control. On a two-bus network we isolate how line length, R/X ratio, and mutual impedances each affects the stability of the DER system. We validate our analysis through simulation of the two-bus network, and validate that the effects found extend to the IEEE 123-node feeder. Furthermore, we find that the impedance properties make it more challenging to design a stable DER system for certain placements of DER on this feeder. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 245,702 |
2110.04507 | TiKick: Towards Playing Multi-agent Football Full Games from
Single-agent Demonstrations | Deep reinforcement learning (DRL) has achieved super-human performance on complex video games (e.g., StarCraft II and Dota II). However, current DRL systems still suffer from challenges of multi-agent coordination, sparse rewards, stochastic environments, etc. In seeking to address these challenges, we employ a football video game, namely Google Research Football (GRF), as our testbed and develop an end-to-end learning-based AI system (denoted as TiKick) to complete this challenging task. In this work, we first generated a large replay dataset from the self-playing of single-agent experts, which are obtained from league training. We then developed a distributed learning system and new offline algorithms to learn a powerful multi-agent AI from the fixed single-agent dataset. To the best of our knowledge, TiKick is the first learning-based AI system that can take over the multi-agent Google Research Football full game, while previous work could either control a single agent or experiment on toy academic scenarios. Extensive experiments further show that our pre-trained model can accelerate the training process of the modern multi-agent algorithm and our method achieves state-of-the-art performances on various academic scenarios. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 259,925
1704.07466 | Learning from Ontology Streams with Semantic Concept Drift | Data stream learning has been largely studied for extracting knowledge structures from continuous and rapid data records. In the semantic Web, data is interpreted in ontologies and its ordered sequence is represented as an ontology stream. Our work exploits the semantics of such streams to tackle the problem of concept drift, i.e., unexpected changes in data distribution, causing most models to become less accurate as time passes. To this end we revisited (i) semantic inference in the context of supervised stream learning, and (ii) models with semantic embeddings. The experiments show accurate prediction with data from Dublin and Beijing. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 72,356
2501.13594 | Text-to-SQL based on Large Language Models and Database Keyword Search | Text-to-SQL prompt strategies based on Large Language Models (LLMs) achieve remarkable performance on well-known benchmarks. However, when applied to real-world databases, their performance is significantly lower than on these benchmarks, especially for Natural Language (NL) questions requiring complex filters and joins to be processed. This paper proposes a strategy to compile NL questions into SQL queries that incorporates a dynamic few-shot examples strategy and leverages the services provided by a database keyword search (KwS) platform. The paper details how the precision and recall of the schema-linking process are improved with the help of the examples provided and the keyword-matching service that the KwS platform offers. Then, it shows how the KwS platform can be used to synthesize a view that captures the joins required to process an input NL question and thereby simplify the SQL query compilation step. The paper includes experiments with a real-world relational database to assess the performance of the proposed strategy. The experiments suggest that the strategy achieves an accuracy on the real-world relational database that surpasses state-of-the-art approaches. The paper concludes by discussing the results obtained. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | false | 526,748
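As a rough illustration of the dynamic few-shot component this abstract mentions (not the paper's actual pipeline), the sketch below retrieves the stored question/SQL pairs most similar to the input question and assembles a prompt; `embed` and `example_bank` are hypothetical stand-ins for a sentence-embedding function and an example store.

```python
import numpy as np

def build_prompt(question, schema, example_bank, embed, k=3):
    """Dynamic few-shot prompting: retrieve the k stored (NL, SQL) pairs
    whose questions are most similar to the input question and prepend
    them to the prompt. `example_bank` is a list of (question, sql)
    pairs; `embed` maps a string to a unit-norm vector (assumed)."""
    q_vec = embed(question)
    sims = [float(q_vec @ embed(ex_q)) for ex_q, _ in example_bank]
    top = np.argsort(sims)[-k:][::-1]
    shots = "\n\n".join(f"Q: {example_bank[i][0]}\nSQL: {example_bank[i][1]}"
                        for i in top)
    return f"Schema:\n{schema}\n\n{shots}\n\nQ: {question}\nSQL:"
```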
2412.08973 | Is Contrastive Distillation Enough for Learning Comprehensive 3D
Representations? | Cross-modal contrastive distillation has recently been explored for learning effective 3D representations. However, existing methods focus primarily on modality-shared features, neglecting the modality-specific features during the pre-training process, which leads to suboptimal representations. In this paper, we theoretically analyze the limitations of current contrastive methods for 3D representation learning and propose a new framework, namely CMCR, to address these shortcomings. Our approach improves upon traditional methods by better integrating both modality-shared and modality-specific features. Specifically, we introduce masked image modeling and occupancy estimation tasks to guide the network in learning more comprehensive modality-specific features. Furthermore, we propose a novel multi-modal unified codebook that learns an embedding space shared across different modalities. Besides, we introduce geometry-enhanced masked image modeling to further boost 3D representation learning. Extensive experiments demonstrate that our method mitigates the challenges faced by traditional approaches and consistently outperforms existing image-to-LiDAR contrastive distillation methods in downstream tasks. Code will be available at https://github.com/Eaphan/CMCR. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 516,304 |
2410.01627 | Intent Detection in the Age of LLMs | Intent detection is a critical component of task-oriented dialogue systems (TODS) which enables the identification of suitable actions to address user utterances at each dialog turn. Traditional approaches relied on computationally efficient supervised sentence transformer encoder models, which require substantial training data and struggle with out-of-scope (OOS) detection. The emergence of generative large language models (LLMs) with intrinsic world knowledge presents new opportunities to address these challenges. In this work, we adapt 7 SOTA LLMs using adaptive in-context learning and chain-of-thought prompting for intent detection, and compare their performance with contrastively fine-tuned sentence transformer (SetFit) models to highlight the prediction quality and latency tradeoff. We propose a hybrid system using an uncertainty-based routing strategy to combine the two approaches, which, along with negative data augmentation, achieves the best of both worlds (i.e., within 2% of native LLM accuracy with 50% less latency). To better understand LLM OOS detection capabilities, we perform controlled experiments revealing that this capability is significantly influenced by the scope of intent labels and the size of the label space. We also introduce a two-step approach utilizing internal LLM representations, demonstrating empirical gains in OOS detection accuracy and F1-score by >5% for the Mistral-7B model. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 493,857
2006.00731 | Second-Order Provable Defenses against Adversarial Attacks | A robustness certificate is the minimum distance of a given input to the decision boundary of the classifier (or its lower bound). For {\it any} input perturbations with a magnitude smaller than the certificate value, the classification output will provably remain unchanged. Exactly computing the robustness certificates for neural networks is difficult since it requires solving a non-convex optimization. In this paper, we provide computationally-efficient robustness certificates for neural networks with differentiable activation functions in two steps. First, we show that if the eigenvalues of the Hessian of the network are bounded, we can compute a robustness certificate in the $l_2$ norm efficiently using convex optimization. Second, we derive a computationally-efficient differentiable upper bound on the curvature of a deep network. We also use the curvature bound as a regularization term during the training of the network to boost its certified robustness. Putting these results together leads to our proposed {\bf C}urvature-based {\bf R}obustness {\bf C}ertificate (CRC) and {\bf C}urvature-based {\bf R}obust {\bf T}raining (CRT). Our numerical results show that CRT leads to significantly higher certified robust accuracy compared to interval-bound propagation (IBP) based training. We achieve certified robust accuracy 69.79\%, 57.78\% and 53.19\% while IBP-based methods achieve 44.96\%, 44.74\% and 44.66\% on 2,3 and 4 layer networks respectively on the MNIST-dataset. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 179,559 |
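The quadratic reasoning behind a curvature-based certificate can be made concrete: if the classification margin m at x has gradient norm ||g|| and the Hessian eigenvalues of the margin function are bounded in magnitude by K, then m(x+delta) >= m - ||g|| r - (K/2) r^2 for all ||delta||_2 <= r, and the certified radius is the positive root of that quadratic. This closed-form toy version is a simplification for intuition, not the paper's convex program.

```python
import numpy as np

def curvature_certificate(margin, grad_norm, K):
    """Largest r with margin - grad_norm*r - (K/2)*r**2 >= 0, i.e. the
    positive root of (K/2) r^2 + ||g|| r - m = 0 (toy 1-D bound)."""
    if margin <= 0:
        return 0.0
    if K == 0.0:  # linear case reduces to a plain Lipschitz-style bound
        return margin / max(grad_norm, 1e-12)
    return (-grad_norm + np.sqrt(grad_norm**2 + 2.0 * K * margin)) / K

print(curvature_certificate(margin=1.0, grad_norm=2.0, K=0.5))
```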
2009.06156 | AutoML for Multilayer Perceptron and FPGA Co-design | State-of-the-art Neural Network Architectures (NNAs) are challenging to design and implement efficiently in hardware. In the past couple of years, this has led to an explosion in research and development of automatic Neural Architecture Search (NAS) tools. AutoML tools are now used to achieve state-of-the-art NNA designs and attempt to optimize for hardware usage and design. Much of the recent research in the auto-design of NNAs has focused on convolutional networks and image recognition, ignoring the fact that a significant part of the workload in data centers is general-purpose deep neural networks. In this work, we develop and test a general multilayer perceptron (MLP) flow that can take arbitrary datasets as input and automatically produce optimized NNAs and hardware designs. We test the flow on six benchmarks. Our results show we exceed the performance of currently published MLP accuracy results and are competitive with non-MLP based results. We compare general and common GPU architectures with our scalable FPGA design and show we can achieve higher efficiency and higher throughput (outputs per second) for the majority of datasets. Further insights into the design space for both accurate networks and high performing hardware show the power of co-design by correlating accuracy versus throughput, network size versus accuracy, and scaling to high-performance devices. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | 195,543
0910.1849 | Color Image Clustering using Block Truncation Algorithm | With the advancement of image-capturing devices, image data is being generated at high volume. If analyzed properly, images can reveal useful information to human users. Content-based image retrieval addresses the problem of retrieving images relevant to the user's needs from image databases on the basis of low-level visual features that can be derived from the images. Grouping images into meaningful categories to reveal useful information is a challenging and important problem. Clustering is a data mining technique to group a set of unsupervised data based on the conceptual clustering principle: maximizing the intraclass similarity and minimizing the interclass similarity. The proposed framework focuses on color as the feature. Color Moment and Block Truncation Coding (BTC) are used to extract features for the image dataset. An experimental study using the K-Means clustering algorithm is conducted to group the image dataset into various clusters. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 4,694
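A minimal sketch of the pipeline this abstract outlines — BTC-style per-channel features followed by K-Means — might look as follows; the feature layout (upper/lower mean per RGB channel) is one common BTC variant, not necessarily the paper's exact feature set.

```python
import numpy as np
from sklearn.cluster import KMeans

def btc_features(image):
    """Block Truncation Coding features per RGB channel: the mean of the
    pixels above the channel mean and the mean of those below it."""
    feats = []
    for c in range(3):
        ch = image[..., c].astype(float)
        mean = ch.mean()
        upper, lower = ch[ch >= mean], ch[ch < mean]
        feats += [upper.mean() if upper.size else mean,
                  lower.mean() if lower.size else mean]
    return np.array(feats)

def cluster_images(images, k):
    """images: list of HxWx3 arrays; k: number of clusters (illustrative)."""
    X = np.stack([btc_features(im) for im in images])
    return KMeans(n_clusters=k, n_init=10).fit_predict(X)
```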
2305.00382 | Constructing a Knowledge Graph from Textual Descriptions of Software
Vulnerabilities in the National Vulnerability Database | Knowledge graphs have shown promise for several cybersecurity tasks, such as vulnerability assessment and threat analysis. In this work, we present a new method for constructing a vulnerability knowledge graph from information in the National Vulnerability Database (NVD). Our approach combines named entity recognition (NER), relation extraction (RE), and entity prediction using a combination of neural models, heuristic rules, and knowledge graph embeddings. We demonstrate how our method helps to fix missing entities in knowledge graphs used for cybersecurity and evaluate the performance. | false | false | false | false | true | false | false | false | true | false | false | false | true | false | false | false | false | true | 361,317 |
2411.15657 | Training an Open-Vocabulary Monocular 3D Object Detection Model without
3D Data | Open-vocabulary 3D object detection has recently attracted considerable attention due to its broad applications in autonomous driving and robotics, which aims to effectively recognize novel classes in previously unseen domains. However, existing point cloud-based open-vocabulary 3D detection models are limited by their high deployment costs. In this work, we propose a novel open-vocabulary monocular 3D object detection framework, dubbed OVM3D-Det, which trains detectors using only RGB images, making it both cost-effective and scalable to publicly available data. Unlike traditional methods, OVM3D-Det does not require high-precision LiDAR or 3D sensor data for either input or generating 3D bounding boxes. Instead, it employs open-vocabulary 2D models and pseudo-LiDAR to automatically label 3D objects in RGB images, fostering the learning of open-vocabulary monocular 3D detectors. However, training 3D models with labels directly derived from pseudo-LiDAR is inadequate due to imprecise boxes estimated from noisy point clouds and severely occluded objects. To address these issues, we introduce two innovative designs: adaptive pseudo-LiDAR erosion and bounding box refinement with prior knowledge from large language models. These techniques effectively calibrate the 3D labels and enable RGB-only training for 3D detectors. Extensive experiments demonstrate the superiority of OVM3D-Det over baselines in both indoor and outdoor scenarios. The code will be released. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 510,708 |
2407.09722 | Optimized Multi-Token Joint Decoding with Auxiliary Model for LLM
Inference | Large language models (LLMs) have achieved remarkable success across diverse tasks, yet their inference processes are hindered by substantial time and energy demands due to single-token generation at each decoding step. While previous methods such as speculative decoding mitigate these inefficiencies by producing multiple tokens per step, each token is still generated by its single-token distribution, thereby enhancing speed without improving effectiveness. In contrast, our work simultaneously enhances inference speed and improves the output effectiveness. We consider multi-token joint decoding (MTJD), which generates multiple tokens from their joint distribution at each iteration, theoretically reducing perplexity and enhancing task performance. However, MTJD suffers from the high cost of sampling from the joint distribution of multiple tokens. Inspired by speculative decoding, we introduce multi-token assisted decoding (MTAD), a novel framework designed to accelerate MTJD. MTAD leverages a smaller auxiliary model to approximate the joint distribution of a larger model, incorporating a verification mechanism that not only ensures the accuracy of this approximation, but also improves the decoding efficiency over conventional speculative decoding. Theoretically, we demonstrate that MTAD closely approximates exact MTJD with bounded error. Empirical evaluations using Llama-2 and OPT models ranging from 13B to 70B parameters across various tasks reveal that MTAD reduces perplexity by 21.2% and improves downstream performance compared to standard single-token sampling. Furthermore, MTAD achieves a 1.42x speed-up and consumes 1.54x less energy than conventional speculative decoding methods. These results highlight MTAD's ability to make multi-token joint decoding both effective and efficient, promoting more sustainable and high-performance deployment of LLMs. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 472,694 |
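For background, the draft-then-verify loop that MTAD builds on can be sketched as standard speculative sampling: a small model proposes tokens, and the large model accepts each with probability min(1, p/q), resampling from the residual distribution at the first rejection. MTAD's joint-distribution verification differs from this plain scheme; the callable interfaces below are hypothetical toy stand-ins for real model calls.

```python
import numpy as np

rng = np.random.default_rng(0)

def speculative_step(draft_probs, target_probs, k=4):
    """One draft-then-verify step. draft_probs/target_probs are callables
    mapping a token prefix (list of ints) to a next-token distribution
    (1-D array summing to 1); both interfaces are assumptions."""
    prefix, accepted = [], []
    for _ in range(k):
        q = draft_probs(prefix)
        t = rng.choice(len(q), p=q)           # draft model proposes
        p = target_probs(prefix)
        if rng.random() < min(1.0, p[t] / q[t]):
            accepted.append(t)                # target model accepts
            prefix.append(t)
        else:
            residual = np.maximum(p - q, 0.0) # resample from the residual
            residual /= residual.sum()        # (assumes p != q somewhere)
            accepted.append(int(rng.choice(len(residual), p=residual)))
            break
    return accepted
```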
2402.08763 | Enhancing Robustness of Indoor Robotic Navigation with Free-Space
Segmentation Models Against Adversarial Attacks | Endeavors in indoor robotic navigation rely on the accuracy of segmentation models to identify free space in RGB images. However, deep learning models are vulnerable to adversarial attacks, posing a significant challenge to their real-world deployment. In this study, we identify vulnerabilities within the hidden layers of neural networks and introduce a practical approach to reinforce traditional adversarial training. Our method incorporates a novel distance loss function, minimizing the gap between hidden layers in clean and adversarial images. Experiments demonstrate satisfactory performance in improving the model's robustness against adversarial perturbations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 429,223 |
2203.12208 | Self-supervised Learning of Adversarial Example: Towards Good
Generalizations for Deepfake Detection | Recent studies in deepfake detection have yielded promising results when the training and testing face forgeries are from the same dataset. However, the problem remains challenging when one tries to generalize the detector to forgeries created by unseen methods in the training dataset. This work addresses the generalizable deepfake detection from a simple principle: a generalizable representation should be sensitive to diverse types of forgeries. Following this principle, we propose to enrich the "diversity" of forgeries by synthesizing augmented forgeries with a pool of forgery configurations and strengthen the "sensitivity" to the forgeries by enforcing the model to predict the forgery configurations. To effectively explore the large forgery augmentation space, we further propose to use the adversarial training strategy to dynamically synthesize the most challenging forgeries to the current model. Through extensive experiments, we show that the proposed strategies are surprisingly effective (see Figure 1), and they could achieve superior performance than the current state-of-the-art methods. Code is available at \url{https://github.com/liangchen527/SLADD}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 287,181 |
1607.00548 | Active Object Localization in Visual Situations | We describe a method for performing active localization of objects in instances of visual situations. A visual situation is an abstract concept---e.g., "a boxing match", "a birthday party", "walking the dog", "waiting for a bus"---whose image instantiations are linked more by their common spatial and semantic structure than by low-level visual similarity. Our system combines given and learned knowledge of the structure of a particular situation, and adapts that knowledge to a new situation instance as it actively searches for objects. More specifically, the system learns a set of probability distributions describing spatial and other relationships among relevant objects. The system uses those distributions to iteratively sample object proposals on a test image, but also continually uses information from those object proposals to adaptively modify the distributions based on what the system has detected. We test our approach's ability to efficiently localize objects, using a situation-specific image dataset created by our group. We compare the results with several baselines and variations on our method, and demonstrate the strong benefit of using situation knowledge and active context-driven localization. Finally, we contrast our method with several other approaches that use context as well as active search for object localization in images. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 58,097 |
1803.07351 | Discrete Potts Model for Generating Superpixels on Noisy Images | Many computer vision applications, such as object recognition and segmentation, increasingly build on superpixels. However, there have been so far few superpixel algorithms that systematically deal with noisy images. We propose to first decompose the image into equal-sized rectangular patches, which also sets the maximum superpixel size. Within each patch, a Potts model for simultaneous segmentation and denoising is applied, that guarantees connected and non-overlapping superpixels and also produces a denoised image. The corresponding optimization problem is formulated as a mixed integer linear program (MILP), and solved by a commercial solver. Extensive experiments on the BSDS500 dataset images with noises are compared with other state-of-the-art superpixel methods. Our method achieves the best result in terms of a combined score (OP) composed of the under-segmentation error, boundary recall and compactness. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 93,034 |
1410.1160 | On Benchmarking Intrusion Detection Systems in Virtualized Environments | Modern intrusion detection systems (IDSes) for virtualized environments are deployed in the virtualization layer with components inside the virtual machine monitor (VMM) and the trusted host virtual machine (VM). Such IDSes can monitor at the same time the network and host activities of all guest VMs running on top of a VMM being isolated from malicious users of these VMs. We refer to IDSes for virtualized environments as VMM-based IDSes. In this work, we analyze state-of-the-art intrusion detection techniques applied in virtualized environments and architectures of VMM-based IDSes. Further, we identify challenges that apply specifically to benchmarking VMM-based IDSes focussing on workloads and metrics. For example, we discuss the challenge of defining representative baseline benign workload profiles as well as the challenge of defining malicious workloads containing attacks targeted at the VMM. We also discuss the impact of on-demand resource provisioning features of virtualized environments (e.g., CPU and memory hotplugging, memory ballooning) on IDS benchmarking measures such as capacity and attack detection accuracy. Finally, we outline future research directions in the area of benchmarking VMM-based IDSes and of intrusion detection in virtualized environments in general. | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | false | 36,534 |
2405.20892 | MALT: Multi-scale Action Learning Transformer for Online Action
Detection | Online action detection (OAD) aims to identify ongoing actions from streaming video in real-time, without access to future frames. Since these actions manifest at varying scales of granularity, ranging from coarse to fine, projecting an entire set of action frames to a single latent encoding may result in a lack of local information, necessitating the acquisition of action features across multiple scales. In this paper, we propose a multi-scale action learning transformer (MALT), which includes a novel recurrent decoder (used for feature fusion) that has fewer parameters and can be trained more efficiently. A hierarchical encoder with multiple encoding branches is further proposed to capture multi-scale action features. The output from the preceding branch is then incrementally input to the subsequent branch as part of a cross-attention calculation. In this way, output features transition from coarse to fine as the branches deepen. We also introduce an explicit frame scoring mechanism employing sparse attention, which filters irrelevant frames more efficiently, without requiring an additional network. The proposed method achieved state-of-the-art performance on two benchmark datasets (THUMOS'14 and TVSeries), outperforming all existing models used for comparison by 0.2% mAP on THUMOS'14 and 0.1% mcAP on TVSeries. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 459,567
1701.03383 | Secure communications with cooperative jamming: Optimal power allocation
and secrecy outage analysis | This paper studies the secrecy rate maximization problem of a secure wireless communication system, in the presence of multiple eavesdroppers. The security of the communication link is enhanced through cooperative jamming, with the help of multiple jammers. First, a feasibility condition is derived to achieve a positive secrecy rate at the destination. Then, we solve the original secrecy rate maximization problem, which is not convex in terms of power allocation at the jammers. To circumvent this non-convexity, the achievable secrecy rate is approximated for a given power allocation at the jammers and the approximated problem is formulated into a geometric programming one. Based on this approximation, an iterative algorithm has been developed to obtain the optimal power allocation at the jammers. Next, we provide a bisection approach, based on one-dimensional search, to validate the optimality of the proposed algorithm. In addition, by assuming Rayleigh fading, the secrecy outage probability (SOP) of the proposed cooperative jamming scheme is analyzed. More specifically, a single-integral form expression for SOP is derived for the most general case as well as a closed-form expression for the special case of two cooperative jammers and one eavesdropper. Simulation results have been provided to validate the convergence and the optimality of the proposed algorithm as well as the theoretical derivations of the presented SOP analysis. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 66,697 |
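The one-dimensional bisection this abstract uses to validate optimality can be sketched generically, assuming a monotone feasibility oracle `is_achievable(rate)` that reports whether some jammer power allocation attains a target secrecy rate; the oracle is a hypothetical stand-in for the paper's inner optimization.

```python
def bisect_max_rate(is_achievable, lo=0.0, hi=10.0, tol=1e-6):
    """Bisection over the target secrecy rate. Requires monotonicity:
    if a rate is achievable, every lower rate is achievable too."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_achievable(mid):
            lo = mid   # feasible: search higher rates
        else:
            hi = mid   # infeasible: search lower rates
    return lo

# Usage with a toy oracle (purely illustrative):
print(bisect_max_rate(lambda r: r <= 3.7))   # ~3.7
```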
2206.14882 | LIDL: Local Intrinsic Dimension Estimation Using Approximate Likelihood | Most of the existing methods for estimating the local intrinsic dimension of a data distribution do not scale well to high-dimensional data. Many of them rely on a non-parametric nearest neighbors approach which suffers from the curse of dimensionality. We attempt to address that challenge by proposing a novel approach to the problem: Local Intrinsic Dimension estimation using approximate Likelihood (LIDL). Our method relies on an arbitrary density estimation method as its subroutine and hence tries to sidestep the dimensionality challenge by making use of the recent progress in parametric neural methods for likelihood estimation. We carefully investigate the empirical properties of the proposed method, compare them with our theoretical predictions, and show that LIDL yields competitive results on the standard benchmarks for this problem and that it scales to thousands of dimensions. What is more, we anticipate this approach to improve further with the continuing advances in the density estimation literature. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 305,415 |
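The likelihood trick behind LIDL admits a compact sketch: perturb the data with Gaussian noise at several scales delta, estimate the log-density at the query point for each scale, and read off the slope of log-density against log(delta), since the perturbed density scales roughly like delta^(LID - D). A kernel density estimator stands in here for the neural density models the paper actually uses.

```python
import numpy as np
from scipy.stats import gaussian_kde

def lidl_estimate(data, x, deltas=(0.02, 0.05, 0.1), seed=0):
    """Local intrinsic dimension at x via the log-likelihood slope:
    LID ~ D + d(log p_delta(x)) / d(log delta). data: (N, D) array."""
    rng = np.random.default_rng(seed)
    D = data.shape[1]
    log_p = []
    for d in deltas:
        noisy = data + rng.normal(0.0, d, size=data.shape)
        log_p.append(np.log(gaussian_kde(noisy.T)(x.reshape(-1, 1))[0]))
    slope = np.polyfit(np.log(deltas), log_p, 1)[0]
    return D + slope
```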
0712.2789 | Trading in Risk Dimensions (TRD) | Previous work, mostly published, developed two-shell recursive trading systems. An inner-shell of Canonical Momenta Indicators (CMI) is adaptively fit to incoming market data. A parameterized trading-rule outer-shell uses the global optimization code Adaptive Simulated Annealing (ASA) to fit the trading system to historical data. A simple fitting algorithm, usually not requiring ASA, is used for the inner-shell fit. An additional risk-management middle-shell has been added to create a three-shell recursive optimization/sampling/fitting algorithm. Portfolio-level distributions of copula-transformed multivariate distributions (with constituent markets possessing different marginal distributions in returns space) are generated by Monte Carlo samplings. ASA is used to importance-sample weightings of these markets. The core code, Trading in Risk Dimensions (TRD), processes Training and Testing trading systems on historical data, and consistently interacts with RealTime trading platforms at minute resolutions, but this scale can be modified. This approach transforms constituent probability distributions into a common space where it makes sense to develop correlations to further develop probability distributions and risk/uncertainty analyses of the full portfolio. ASA is used for importance-sampling these distributions and for optimizing system parameters. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 1,049 |
1905.12969 | Enriched Mixtures of Gaussian Process Experts | Mixtures of experts probabilistically divide the input space into regions, where the assumptions of each expert, or conditional model, need only hold locally. Combined with Gaussian process (GP) experts, this results in a powerful and highly flexible model. We focus on alternative mixtures of GP experts, which model the joint distribution of the inputs and targets explicitly. We highlight issues of this approach in multi-dimensional input spaces, namely, poor scalability and the need for an unnecessarily large number of experts, degrading the predictive performance and increasing uncertainty. We construct a novel model to address these issues through a nested partitioning scheme that automatically infers the number of components at both levels. Multiple response types are accommodated through a generalised GP framework, while multiple input types are included through a factorised exponential family structure. We show the effectiveness of our approach in estimating a parsimonious probabilistic description of both synthetic data of increasing dimension and an Alzheimer's challenge dataset. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 132,957 |
2401.12455 | Multi-agent deep reinforcement learning with centralized training and
decentralized execution for transportation infrastructure management | We present a multi-agent Deep Reinforcement Learning (DRL) framework for managing large transportation infrastructure systems over their life-cycle. Life-cycle management of such engineering systems is a computationally intensive task, requiring appropriate sequential inspection and maintenance decisions able to reduce long-term risks and costs, while dealing with different uncertainties and constraints that lie in high-dimensional spaces. To date, static age- or condition-based maintenance methods and risk-based or periodic inspection plans have mostly addressed this class of optimization problems. However, optimality, scalability, and uncertainty limitations are often manifested under such approaches. The optimization problem in this work is cast in the framework of constrained Partially Observable Markov Decision Processes (POMDPs), which provides a comprehensive mathematical basis for stochastic sequential decision settings with observation uncertainties, risk considerations, and limited resources. To address significantly large state and action spaces, a Deep Decentralized Multi-agent Actor-Critic (DDMAC) DRL method with Centralized Training and Decentralized Execution (CTDE), termed as DDMAC-CTDE is developed. The performance strengths of the DDMAC-CTDE method are demonstrated in a generally representative and realistic example application of an existing transportation network in Virginia, USA. The network includes several bridge and pavement components with nonstationary degradation, agency-imposed constraints, and traffic delay and risk considerations. Compared to traditional management policies for transportation networks, the proposed DDMAC-CTDE method vastly outperforms its counterparts. Overall, the proposed algorithmic framework provides near optimal solutions for transportation infrastructure management under real-world constraints and complexities. | false | false | false | false | true | false | true | false | false | false | true | false | false | false | true | false | false | false | 423,380 |
1508.04211 | Scalable Bayesian Non-Negative Tensor Factorization for Massive Count
Data | We present a Bayesian non-negative tensor factorization model for count-valued tensor data, and develop scalable inference algorithms (both batch and online) for dealing with massive tensors. Our generative model can handle overdispersed counts as well as infer the rank of the decomposition. Moreover, leveraging a reparameterization of the Poisson distribution as a multinomial facilitates conjugacy in the model and enables simple and efficient Gibbs sampling and variational Bayes (VB) inference updates, with a computational cost that only depends on the number of nonzeros in the tensor. The model also provides a nice interpretability for the factors; in our model, each factor corresponds to a "topic". We develop a set of online inference algorithms that allow further scaling up the model to massive tensors, for which batch inference methods may be infeasible. We apply our framework on diverse real-world applications, such as \emph{multiway} topic modeling on a scientific publications database, analyzing a political science data set, and analyzing a massive household transactions data set. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 46,105 |
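The multinomial augmentation that restores conjugacy can be shown in the two-way (matrix) special case of the model: each observed count is thinned into per-factor counts, after which the Gamma updates are conjugate and the per-iteration cost depends only on the nonzeros. This is a bare-bones sketch with illustrative hyperparameters, not the paper's full tensor model or its online variants.

```python
import numpy as np

def gibbs_poisson_mf(Y, K, iters=200, a=0.3, b=1.0, seed=0):
    """Gibbs sampler for Y_ij ~ Poisson(sum_k U_ik V_jk) with
    Gamma(a, b) priors on U and V (Y must hold integer counts)."""
    rng = np.random.default_rng(seed)
    I, J = Y.shape
    U = rng.gamma(a, 1.0 / b, (I, K))
    V = rng.gamma(a, 1.0 / b, (J, K))
    nz = np.transpose(np.nonzero(Y))
    for _ in range(iters):
        SU, SV = np.zeros((I, K)), np.zeros((J, K))
        for i, j in nz:                        # multinomial thinning
            p = U[i] * V[j]
            counts = rng.multinomial(Y[i, j], p / p.sum())
            SU[i] += counts
            SV[j] += counts
        U = rng.gamma(a + SU, 1.0 / (b + V.sum(axis=0)))  # conjugate update
        V = rng.gamma(a + SV, 1.0 / (b + U.sum(axis=0)))
    return U, V
```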
2303.09461 | ChatGPT Participates in a Computer Science Exam | We asked ChatGPT to participate in an undergraduate computer science exam on ''Algorithms and Data Structures''. The program was evaluated on the entire exam as posed to the students. We hand-copied its answers onto an exam sheet, which was subsequently graded in a blind setup alongside those of 200 participating students. We find that ChatGPT narrowly passed the exam, obtaining 20.5 out of 40 points. This impressive performance indicates that ChatGPT can indeed succeed in challenging tasks like university exams. At the same time, the questions in our exam are structurally similar to those of other exams, solved homework problems, and teaching materials that can be found online and might have been part of ChatGPT's training data. Therefore, it would be inadequate to conclude from this experiment that ChatGPT has any understanding of computer science. We also assess the improvements brought by GPT-4. We find that GPT-4 would have obtained about 17\% more exam points than GPT-3.5, reaching the performance of the average student. The transcripts of our conversations with ChatGPT are available at \url{https://github.com/tml-tuebingen/chatgpt-algorithm-exam}, and the entire graded exam is in the appendix of this paper. | false | false | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | 352,053 |
2208.02442 | FedDRL: Deep Reinforcement Learning-based Adaptive Aggregation for
Non-IID Data in Federated Learning | The uneven distribution of local data across different edge devices (clients) results in slow model training and accuracy reduction in federated learning. Naive federated learning (FL) strategies and most alternative solutions attempt to achieve more fairness by weighted aggregation of deep learning models across clients. This work introduces a novel non-IID type encountered in real-world datasets, namely cluster-skew, in which groups of clients have local data with similar distributions, causing the global model to converge to an over-fitted solution. To deal with non-IID data, particularly the cluster-skewed data, we propose FedDRL, a novel FL model that employs deep reinforcement learning to adaptively determine each client's impact factor (which will be used as the weights in the aggregation process). Extensive experiments on a suite of federated datasets confirm that the proposed FedDRL outperforms the FedAvg and FedProx methods, e.g., by up to 4.05% and 2.17% on average for the CIFAR-100 dataset, respectively. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 311,467
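The aggregation step that FedDRL modifies can be sketched independently of the DRL component: given per-client impact factors (which the paper's policy would output), the server forms a convex combination of client weights. The dict-of-arrays layout below is an assumption for illustration, not the paper's implementation.

```python
import numpy as np

def aggregate(client_weights, impact_factors):
    """Server-side aggregation with per-client impact factors. Here the
    factors are given; in FedDRL a learned policy would produce them.
    client_weights: list of dicts mapping layer names to arrays."""
    w = np.asarray(impact_factors, dtype=float)
    w = w / w.sum()                   # normalize to a convex combination
    return {name: sum(wi * cw[name] for wi, cw in zip(w, client_weights))
            for name in client_weights[0]}
```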
2110.10205 | MultiHead MultiModal Deep Interest Recommendation Network | With the development of information technology, human beings are constantly producing a large amount of information at all times. How to obtain the information that users are interested in from this large amount of information has become an issue of great concern to users and even business managers. To solve this problem, from traditional machine learning to deep learning recommendation systems, researchers continue to improve and optimize models and explore solutions. Because researchers have focused mainly on optimizing the network structure of recommendation models, there has been less research on enriching recommendation model features, and there is still room for in-depth recommendation model optimization. Based on the DIN\cite{Authors01} model, this paper adds multi-head and multi-modal modules, which enrich the feature sets that the model can use and strengthen the cross-combination and fitting capabilities of the model. Experiments show that the multi-head multi-modal DIN improves the recommendation prediction effect and outperforms current state-of-the-art methods on various comprehensive indicators. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 262,063
1810.13044 | Scalable Laplacian K-modes | We advocate Laplacian K-modes for joint clustering and density mode finding, and propose a concave-convex relaxation of the problem, which yields a parallel algorithm that scales up to large datasets and high dimensions. We optimize a tight bound (auxiliary function) of our relaxation, which, at each iteration, amounts to computing an independent update for each cluster-assignment variable, with guaranteed convergence. Therefore, our bound optimizer can be trivially distributed for large-scale data sets. Furthermore, we show that the density modes can be obtained as byproducts of the assignment variables via simple maximum-value operations whose additional computational cost is linear in the number of data points. Our formulation does not require storing a full affinity matrix and computing its eigenvalue decomposition, neither does it perform expensive projection steps and Lagrangian-dual inner iterates for the simplex constraints of each point. Furthermore, unlike mean-shift, our density-mode estimation does not require inner-loop gradient-ascent iterates. It has a complexity independent of feature-space dimension, yields modes that are valid data points in the input set and is applicable to discrete domains as well as arbitrary kernels. We report comprehensive experiments over various data sets, which show that our algorithm yields very competitive performance in terms of optimization quality (i.e., the value of the discrete-variable objective at convergence) and clustering accuracy. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 111,889
2012.11400 | Unifying Homophily and Heterophily Network Transformation via Motifs | Higher-order proximity (HOP) is fundamental for most network embedding methods due to its significant effects on the quality of node embedding and performance on downstream network analysis tasks. Most existing HOP definitions are based on either homophily to place close and highly interconnected nodes tightly in embedding space or heterophily to place distant but structurally similar nodes together after embedding. In real-world networks, both can co-exist, and thus considering only one could limit the prediction performance and interpretability. However, there is no general and universal solution that takes both into consideration. In this paper, we propose such a simple yet powerful framework called homophily- and heterophily-preserving network transformation (H2NT) to capture HOP that flexibly unifies homophily and heterophily. Specifically, H2NT utilises motif representations to transform a network into a new network with a hybrid assumption via micro-level and macro-level walk paths. H2NT can be used as an enhancer to be integrated with any existing network embedding methods without requiring any changes to the latter. Because H2NT can sparsify networks with motif structures, it can also improve the computational efficiency of existing network embedding methods when integrated. We conduct experiments on node classification, structural role classification and motif prediction to show the superior prediction performance and computational efficiency over state-of-the-art methods. In particular, DeepWalk-based H2NT achieves 24% improvement in terms of precision on motif prediction, while reducing computational time by 46% compared to the original DeepWalk. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 212,628
2303.07131 | Evolutionary quantum feature selection | Effective feature selection is essential for enhancing the performance of artificial intelligence models. It involves identifying feature combinations that optimize a given metric, but this is a challenging task due to the problem's exponential time complexity. In this study, we present an innovative heuristic called Evolutionary Quantum Feature Selection (EQFS) that employs the Quantum Circuit Evolution (QCE) algorithm. Our approach harnesses the unique capabilities of QCE, which utilizes shallow depth circuits to generate sparse probability distributions. Our computational experiments demonstrate that EQFS can identify good feature combinations with quadratic scaling in the number of features. To evaluate EQFS's performance, we counted the number of times a given classical model assesses the cost function for a specific metric, as a function of the number of generations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 351,133 |
1804.09401 | Generative Temporal Models with Spatial Memory for Partially Observed
Environments | In model-based reinforcement learning, generative and temporal models of environments can be leveraged to boost agent performance, either by tuning the agent's representations during training or via use as part of an explicit planning mechanism. However, their application in practice has been limited to simplistic environments, due to the difficulty of training such models in larger, potentially partially-observed and 3D environments. In this work we introduce a novel action-conditioned generative model of such challenging environments. The model features a non-parametric spatial memory system in which we store learned, disentangled representations of the environment. Low-dimensional spatial updates are computed using a state-space model that makes use of knowledge on the prior dynamics of the moving agent, and high-dimensional visual observations are modelled with a Variational Auto-Encoder. The result is a scalable architecture capable of performing coherent predictions over hundreds of time steps across a range of partially observed 2D and 3D environments. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 95,967 |
1809.07778 | Moderate deviation analysis of majorisation-based resource
interconversion | We consider the problem of interconverting a finite amount of resources within all theories whose single-shot transformation rules are based on a majorisation relation, e.g. the resource theories of entanglement and coherence (for pure state transformations), as well as thermodynamics (for energy-incoherent transformations). When only finite resources are available we expect to see a non-trivial trade-off between the rate $r_n$ at which $n$ copies of a resource state $\rho$ can be transformed into $nr_n$ copies of another resource state $\sigma$, and the error level $\varepsilon_n$ of the interconversion process, as a function of $n$. In this work we derive the optimal trade-off in the so-called moderate deviation regime, where the rate of interconversion $r_n$ approaches its optimum in the asymptotic limit of unbounded resources ($n\to\infty$), while the error $\varepsilon_n$ vanishes in the same limit. We find that the moderate deviation analysis exhibits a resonance behaviour which implies that certain pairs of resource states can be interconverted at the asymptotically optimal rate with negligible error, even in the finite $n$ regime. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 108,352
2010.08596 | Efficient Robotic Object Search via HIEM: Hierarchical Policy Learning
with Intrinsic-Extrinsic Modeling | Although its significant success at enabling robots with autonomous behaviors makes deep reinforcement learning a promising approach for the robotic object search task, the approach severely suffers from the naturally sparse reward setting of the task. To tackle this challenge, we present a novel policy learning paradigm for the object search task, based on hierarchical and interpretable modeling with an intrinsic-extrinsic reward setting. More specifically, we explore the environment efficiently through a proxy low-level policy which is driven by intrinsic rewarding sub-goals. We further learn our hierarchical policy from the efficient exploration experience, where we optimize both our high-level and low-level policies towards the extrinsic rewarding goal to perform the object search task well. Experiments conducted on the House3D environment validate and show that a robot trained with our model can perform the object search task in a more optimal and interpretable way. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 201,225
2108.11811 | When should agents explore? | Exploration remains a central challenge for reinforcement learning (RL). Virtually all existing methods share the feature of a monolithic behaviour policy that changes only gradually (at best). In contrast, the exploratory behaviours of animals and humans exhibit a rich diversity, namely including forms of switching between modes. This paper presents an initial study of mode-switching, non-monolithic exploration for RL. We investigate different modes to switch between, at what timescales it makes sense to switch, and what signals make for good switching triggers. We also propose practical algorithmic components that make the switching mechanism adaptive and robust, which enables flexibility without an accompanying hyper-parameter-tuning burden. Finally, we report a promising and detailed analysis on Atari, using two-mode exploration and switching at sub-episodic time-scales. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 252,299 |
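A toy version of mode-switching exploration helps fix ideas: two behaviour modes and a stochastic sub-episodic trigger whose firing probability grows with time spent in the current mode. The trigger form and constants below are illustrative, not the switching mechanisms the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(0)

def act(q_values, mode):
    """Two behaviour modes: greedy exploitation vs. uniform exploration."""
    if mode == "explore":
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def maybe_switch(mode, steps_in_mode, trigger_prob=0.05):
    """Sub-episodic switching trigger: flip modes at random times, more
    likely the longer the agent has stayed in the current mode."""
    if rng.random() < trigger_prob * steps_in_mode:
        return ("exploit" if mode == "explore" else "explore"), 0
    return mode, steps_in_mode + 1
```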
2501.09597 | Reducing the Sensitivity of Neural Physics Simulators to Mesh Topology via Pretraining | Meshes are used to represent complex objects in high fidelity physics simulators across a variety of domains, such as radar sensing and aerodynamics. There is growing interest in using neural networks to accelerate physics simulations, and also a growing body of work on applying neural networks directly to irregular mesh data. Since multiple mesh topologies can represent the same object, mesh augmentation is typically required to handle topological variation when training neural networks. Due to the sensitivity of physics simulators to small changes in mesh shape, it is challenging to use these augmentations when training neural network-based physics simulators. In this work, we show that variations in mesh topology can significantly reduce the performance of neural network simulators. We evaluate whether pretraining can be used to address this issue, and find that employing an established autoencoder pretraining technique with graph embedding models reduces the sensitivity of neural network simulators to variations in mesh topology. Finally, we highlight future research directions that may further reduce neural simulator sensitivity to mesh topology. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 525,196
1809.02855 | iDriveSense: Dynamic Route Planning Involving Roads Quality Information | Owing to the expeditious growth in information and communication technologies, smart cities have raised expectations in terms of efficient functioning and management. One key aspect of residents' daily comfort is ensured by providing reliable traffic management and route planning. Most present trip planning applications and service providers base their recommendations on shortest paths and/or fastest routes. However, such suggestions may discount drivers' preferences with respect to safe and less disturbing trips. Road anomalies such as cracks, potholes, and manholes induce risky driving scenarios and can lead to vehicle damage and costly repairs. Accordingly, in this paper, we propose a crowdsensing-based dynamic route planning system. Leveraging both vehicle motion sensors and the inertial sensors within smart devices, road surface types and anomalies are detected and categorized. In addition, the monitored events are geo-referenced using GPS receivers on both vehicles and smart devices. Road segment assessments are then conducted using fuzzy system models based on aspects such as the number of anomalies and their severity levels in each road segment. Afterward, another fuzzy model is adopted to recommend the best trip routes based on the road segment quality of each potential route. Extensive road experiments are conducted to build and demonstrate the potential of the proposed system. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 107,157
1906.04111 | A New Ratio Image Based CNN Algorithm For SAR Despeckling | In the SAR domain, many applications such as classification, detection and segmentation are impaired by speckle. Hence, despeckling of SAR images is key for scene understanding. Despeckling filters usually face a trade-off between speckle suppression and information preservation. In recent years, deep learning solutions for speckle reduction have been proposed. One of the biggest issues for these methods is how to train a network given the lack of a reference. In this work we propose a convolutional neural network based solution trained on simulated data. We propose the use of a cost function taking into account both spatial and statistical properties. The aim is twofold: to overcome the trade-off between speckle suppression and detail preservation, and to find a suitable cost function for despeckling in unsupervised learning. The algorithm is validated on both real and simulated data, showing interesting performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 134,607
2210.13968 | Faster Projection-Free Augmented Lagrangian Methods via Weak Proximal Oracle | This paper considers a convex composite optimization problem with affine constraints, which includes problems that take the form of minimizing a smooth convex objective function over the intersection of (simple) convex sets, or regularized with multiple (simple) functions. Motivated by high-dimensional applications in which exact projection/proximal computations are not tractable, we propose a \textit{projection-free} augmented Lagrangian-based method, in which primal updates are carried out using a \textit{weak proximal oracle} (WPO). In an earlier work, WPO was shown to be more powerful than the standard \textit{linear minimization oracle} (LMO) that underlies conditional gradient-based methods (aka Frank-Wolfe methods). Moreover, WPO is computationally tractable for many high-dimensional problems of interest, including those motivated by recovery of low-rank matrices and tensors, and optimization over polytopes which admit efficient LMOs. The main result of this paper shows that under a certain curvature assumption (which is weaker than strong convexity), our WPO-based algorithm achieves an ergodic rate of convergence of $O(1/T)$ for both the objective residual and feasibility gap. This result, to the best of our knowledge, improves upon the $O(1/\sqrt{T})$ rate for existing LMO-based projection-free methods for this class of problems. Empirical experiments on a low-rank and sparse covariance matrix estimation task and the Max Cut semidefinite relaxation demonstrate that our method can outperform state-of-the-art LMO-based Lagrangian methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 326,380
2112.01236 | Local Justice and the Algorithmic Allocation of Societal Resources | AI is increasingly used to aid decision-making about the allocation of scarce societal resources, for example housing for homeless people, organs for transplantation, and food donations. Recently, there have been several proposals for how to design objectives for these systems that attempt to achieve some combination of fairness, efficiency, incentive compatibility, and satisfactory aggregation of stakeholder preferences. This paper lays out possible roles and opportunities for AI in this domain, arguing for a closer engagement with the political philosophy literature on local justice, which provides a framework for thinking about how societies have over time framed objectives for such allocation problems. It also discusses how we may be able to integrate into this framework the opportunities and risks opened up by the ubiquity of data and the availability of algorithms that can use them to make accurate predictions about the future. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 269,417 |
1506.05889 | Constrained adaptive sensing | Suppose that we wish to estimate a vector $\mathbf{x} \in \mathbb{C}^n$ from a small number of noisy linear measurements of the form $\mathbf{y} = \mathbf{A x} + \mathbf{z}$, where $\mathbf{z}$ represents measurement noise. When the vector $\mathbf{x}$ is sparse, meaning that it has only $s$ nonzeros with $s \ll n$, one can obtain a significantly more accurate estimate of $\mathbf{x}$ by adaptively selecting the rows of $\mathbf{A}$ based on the previous measurements provided that the signal-to-noise ratio (SNR) is sufficiently large. In this paper we consider the case where we wish to realize the potential of adaptivity but where the rows of $\mathbf{A}$ are subject to physical constraints. In particular, we examine the case where the rows of $\mathbf{A}$ are constrained to belong to a finite set of allowable measurement vectors. We demonstrate both the limitations and advantages of adaptive sensing in this constrained setting. We prove that for certain measurement ensembles, the benefits offered by adaptive designs fall far short of the improvements that are possible in the unconstrained adaptive setting. On the other hand, we also provide both theoretical and empirical evidence that in some scenarios adaptivity does still result in substantial improvements even in the constrained setting. To illustrate these potential gains, we propose practical algorithms for constrained adaptive sensing by exploiting connections to the theory of optimal experimental design and show that these algorithms exhibit promising performance in some representative applications. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 44,348 |
1804.09601 | Optimizing gas networks using adjoint gradients | An increasing number of gas-fired power plants are currently being installed in modern power grids worldwide. This is due to their low cost and the inherent flexibility they offer to the electrical network, particularly in the face of increasing renewable generation. However, the integration and operation of gas generators pose additional challenges to gas network operators, mainly because they can induce rapid changes in demand. This paper presents an efficient scheme for minimizing gas compression costs under dynamic conditions where deliveries to customers are described by time-dependent mass flows. The optimization scheme comprises a set of transient nonlinear partial differential equations that model the isothermal gas flow in pipes, an adjoint problem for efficient calculation of the objective gradients and constraint Jacobians, and state-of-the-art optimal control methods for solving nonlinear programs. As the evaluation of constraint Jacobians can become computationally costly as the number of constraints increases, efficient constraint lumping schemes are proposed and investigated with respect to accuracy and performance. The resulting optimal control problems are solved using both interior-point and sequential quadratic programming methods. The proposed optimization framework is validated through several benchmark cases of increasing complexity. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 96,008
2012.07495 | Deep Learning for Material recognition: most recent advances and open challenges | Recognizing materials from color images is still a challenging problem today. While deep neural networks provide very good results on object recognition and have been the topic of a huge number of papers in the last decade, their adaptation to material images still requires some work to reach equivalent accuracy. Nevertheless, recent studies achieve very good results in material recognition with deep learning, and we propose, in this paper, to review most of them by focusing on three aspects: material image datasets, the influence of context, and ad hoc descriptors for material appearance. Each aspect is introduced in a systematic manner and results from representative works are cited. We also present our own studies in this area and point out some open challenges for future work. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 211,474
2310.09441 | MEMTRACK: A Deep Learning-Based Approach to Microrobot Tracking in Dense and Low-Contrast Environments | Tracking microrobots is challenging, considering their minute size and high speed. As the field progresses towards developing microrobots for biomedical applications and conducting mechanistic studies in physiologically relevant media (e.g., collagen), this challenge is exacerbated by the dense surrounding environments with feature size and shape comparable to microrobots. Herein, we report Motion Enhanced Multi-level Tracker (MEMTrack), a robust pipeline for detecting and tracking microrobots using synthetic motion features, deep learning-based object detection, and a modified Simple Online and Real-time Tracking (SORT) algorithm with interpolation for tracking. Our object detection approach combines different models based on the object's motion pattern. We trained and validated our model using bacterial micro-motors in collagen (tissue phantom) and tested it in collagen and aqueous media. We demonstrate that MEMTrack accurately tracks even the most challenging bacteria missed by skilled human annotators, achieving precision and recall of 77% and 48% in collagen and 94% and 35% in liquid media, respectively. Moreover, we show that MEMTrack can quantify average bacteria speed with no statistically significant difference from the laboriously-produced manual tracking data. MEMTrack represents a significant contribution to microrobot localization and tracking, and opens the potential for vision-based deep learning approaches to microrobot control in dense and low-contrast settings. All source code for training and testing MEMTrack and for reproducing the results of the paper has been made publicly available at https://github.com/sawhney-medha/MEMTrack. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 399,769
1212.4991 | A Physical Layer Secured Key Distribution Technique for IEEE 802.11g Wireless Networks | Key distribution and renewal in wireless local area networks is a crucial issue to guarantee that unauthorized users are prevented from accessing the network. In this paper, we propose a technique for allowing automatic bootstrap and periodic renewal of the network key by exploiting physical layer security principles, that is, the inherent differences among transmission channels. The proposed technique is based on scrambling of groups of consecutive packets and does not require initial authentication or automatic repeat request protocols. We present a modification of the scrambling circuits included in the IEEE 802.11g standard which allows for suitable error propagation at the unauthorized receiver, thus achieving physical layer security. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 20,505
1701.01955 | Sampled-Data Boundary Feedback Control of 1-D Parabolic PDEs | The paper provides results for the application of boundary feedback control with Zero-Order-Hold (ZOH) to 1-D linear parabolic systems on bounded domains. It is shown that the continuous-time boundary feedback applied in a sample-and-hold fashion guarantees closed-loop exponential stability, provided that the sampling period is sufficiently small. Two different continuous-time feedback designs are considered: the reduced model design and the backstepping design. The obtained results provide stability estimates for weighted 2-norms of the state, and robustness with respect to perturbations of the sampling schedule is guaranteed. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 66,483
2306.00029 | CodeTF: One-stop Transformer Library for State-of-the-art Code LLM | Code intelligence plays a key role in transforming modern software engineering. Recently, deep learning-based models, especially Transformer-based large language models (LLMs), have demonstrated remarkable potential in tackling these tasks by leveraging massive open-source code data and programming language features. However, the development and deployment of such models often require expertise in both machine learning and software engineering, creating a barrier to model adoption. In this paper, we present CodeTF, an open-source Transformer-based library for state-of-the-art Code LLMs and code intelligence. Following the principles of modular design and extensible framework, we design CodeTF with a unified interface to enable rapid access and development across different types of models, datasets and tasks. Our library supports a collection of pretrained Code LLM models and popular code benchmarks, including a standardized interface to train and serve code LLMs efficiently, and data features such as language-specific parsers and utility functions for extracting code attributes. We describe the design principles, the architecture, key modules and components, and compare it with other related library tools. Finally, we hope CodeTF is able to bridge the gap between machine learning/generative AI and software engineering, providing a comprehensive open-source solution for developers, researchers, and practitioners. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 369,841
1709.04320 | An efficient genetic algorithm for large-scale transmit power control of dense industrial wireless networks | The industrial wireless local area network (IWLAN) is increasingly dense, not only due to the penetration of wireless applications into factories and warehouses, but also because of the rising need of redundancy for robust wireless coverage. Instead of powering on all the nodes with the maximal transmit power, it becomes an unavoidable challenge to control the transmit power of all wireless nodes on a large scale, in order to reduce interference and adapt coverage to the latest shadowing effects in the environment. Therefore, this paper proposes an efficient genetic algorithm (GA) to solve this transmit power control (TPC) problem for dense IWLANs, named GATPC. Effective population initialization, crossover and mutation, parallel computing as well as dedicated speedup measures are introduced to tailor GATPC for the large-scale optimization that is intrinsically involved in this problem. In contrast to most coverage-related optimization algorithms which cannot deal with the prevalent shadowing effects in harsh industrial indoor environments, an empirical one-slope path loss model considering three-dimensional obstacle shadowing effects is used in GATPC, in order to enable accurate yet simple coverage prediction. Experimental validation and numerical experiments in real industrial cases show the promising performance of GATPC in terms of scalability to a hyper-large scale, up to 37-times speedup in resolution runtime, and solution quality to achieve adaptive coverage and to minimize interference. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | 80,636
2212.06034 | IMDB Spoiler Dataset | User-generated reviews are often our first point of contact when we consider watching a movie or a TV show. However, beyond telling us the qualitative aspects of the media we want to consume, reviews may inevitably contain undesired revelatory information (i.e. 'spoilers') such as the surprising fate of a character in a movie, or the identity of a murderer in a crime-suspense movie, etc. In this paper, we present a high-quality movie-review based spoiler dataset to tackle the problem of spoiler detection and describe various research questions it can answer. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 335,983 |
1905.12442 | Rank-one Multi-Reference Factor Analysis | In recent years, there has been a growing need for processing methods aimed at extracting useful information from large datasets. In many cases the challenge is to discover a low-dimensional structure in the data, often concealed by the existence of nuisance parameters and noise. Motivated by such challenges, we consider the problem of estimating a signal from its scaled, cyclically-shifted and noisy observations. We focus on the particularly challenging regime of low signal-to-noise ratio (SNR), where different observations cannot be shift-aligned. We show that an accurate estimation of the signal from its noisy observations is possible, and derive a procedure which is proved to consistently estimate the signal. The asymptotic sample complexity (the number of observations required to recover the signal) of the procedure is $1/\operatorname{SNR}^4$. Additionally, we propose a procedure which is experimentally shown to improve the sample complexity by a factor equal to the signal's length. Finally, we present numerical experiments which demonstrate the performance of our algorithms, and corroborate our theoretical findings. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 132,773
2404.13325 | Physics-Informed Neural Networks: a Plug and Play Integration into Power System Dynamic Simulations | Time-domain simulations are crucial for ensuring power system stability and avoiding critical scenarios that could lead to blackouts. Next-generation power systems require a significant increase in the computational cost and complexity of these simulations due to additional degrees of uncertainty, non-linearity and states. Physics-Informed Neural Networks (PINNs) have been shown to accelerate single-component simulations by several orders of magnitude. However, their application to current time-domain simulation solvers has been particularly challenging since the system's dynamics depend on multiple components. Using a new training formulation, this paper introduces the first natural step to integrate PINNs into multi-component time-domain simulations. We propose PINNs as an alternative to other classical numerical methods for individual components. Once trained, these neural networks approximate component dynamics more accurately for longer time steps. Formulated as an implicit method consistent with the transient simulation workflow, PINNs speed up simulation time by significantly increasing the time steps used. For clarity of explanation, we demonstrate the training, integration, and simulation framework for several combinations of PINNs and numerical solution methods using the IEEE 9-bus system, although the method applies equally well to any power system size. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 448,247
2309.16699 | Circular-Line Trajectory Tracking Controller for Mobile Robot using Multi-Pixy2 Sensors | This study proposes a novel tracking method that employs three Pixy2 sensors to identify the desired line trajectories instead of traditional perception methods. Firstly, the kinematic model of the mobile robot is derived from the information gathered by the three Pixy2 sensors. Secondly, a sliding mode controller is implemented to regulate the tracking error. Finally, simulation results are analyzed to show the effectiveness of the proposed method. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 395,459
2204.12784 | Learn from Structural Scope: Improving Aspect-Level Sentiment Analysis with Hybrid Graph Convolutional Networks | Aspect-level sentiment analysis aims to determine the sentiment polarity towards a specific target in a sentence. The main challenge of this task is to effectively model the relation between targets and sentiments so as to filter out noisy opinion words from irrelevant targets. Most recent efforts capture relations through target-sentiment pairs or opinion spans from a word-level or phrase-level perspective. Based on the observation that targets and sentiments essentially establish relations following the grammatical hierarchy of phrase-clause-sentence structure, it is hopeful to exploit comprehensive syntactic information for better guiding the learning process. Therefore, we introduce the concept of Scope, which outlines a structural text region related to a specific target. To jointly learn structural Scope and predict the sentiment polarity, we propose a hybrid graph convolutional network (HGCN) to synthesize information from constituency tree and dependency tree, exploring the potential of linking two syntax parsing methods to enrich the representation. Experimental results on four public datasets illustrate that our HGCN model outperforms current state-of-the-art baselines. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 293,601
2501.07294 | Dataset-Agnostic Recommender Systems | [This is a position paper and does not contain any empirical or theoretical results] Recommender systems have become a cornerstone of personalized user experiences, yet their development typically involves significant manual intervention, including dataset-specific feature engineering, hyperparameter tuning, and configuration. To this end, we introduce a novel paradigm: Dataset-Agnostic Recommender Systems (DAReS), which aims to enable a single codebase to autonomously adapt to various datasets, without the need for fine-tuning, for a given recommender system task. Central to this approach is the Dataset Description Language (DsDL), a structured format that provides metadata about the dataset's features and labels and allows the system to understand the dataset's characteristics, enabling it to autonomously manage processes like feature selection, missing-value imputation, noise removal, and hyperparameter optimization. By reducing the need for domain-specific expertise and manual adjustments, DAReS offers a more efficient and scalable solution for building recommender systems across diverse application domains. It addresses critical challenges in the field, such as reusability, reproducibility, and accessibility for non-expert users or entry-level researchers. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 524,344
1701.00376 | A time-variant channel prediction and feedback framework for interference alignment | In interference channels, channel state information (CSI) can be exploited to reduce the interference signal dimensions and thus achieve the optimal capacity scaling, i.e. degrees of freedom, promised by the interference alignment technique. However, imperfect CSI, due to channel estimation error, imperfect CSI feedback and time selectivity of the channel, leads to a performance loss. In this work, we propose a novel limited feedback algorithm for single-input single-output interference alignment in time-variant channels. The feedback algorithm encodes the channel evolution in a small number of subspace coefficients, which allow for reduced-rank channel prediction to compensate for the channel estimation error due to time selectivity of the fading process and feedback delay. An upper bound for the rate loss caused by feedback quantization and channel prediction is derived. Based on this bound, we develop a dimension switching algorithm for the reduced-rank predictor to find the best trade-off between quantization error and prediction error. Besides, we characterize the scaling of the required number of feedback bits needed to decouple the rate loss due to channel quantization from the transmit power. Simulation results show that a rate gain over the traditional non-predictive feedback strategy can be secured and a 60% higher rate is achieved at 20 dB signal-to-noise ratio with moderate mobility. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 66,266
2110.07868 | FedMe: Federated Learning via Model Exchange | Federated learning is a distributed machine learning method in which a single server and multiple clients collaboratively build machine learning models without sharing datasets on clients. Numerous methods have been proposed to cope with the data heterogeneity issue in federated learning. Existing solutions require a model architecture tuned by the central server, yet a major technical challenge is that it is difficult to tune the model architecture due to the absence of local data on the central server. In this paper, we propose Federated learning via Model exchange (FedMe), which personalizes models with automatic model architecture tuning during the learning process. The novelty of FedMe lies in its learning process: clients exchange their models for model architecture tuning and model training. First, to optimize the model architectures for local data, clients tune their own personalized models by comparing to exchanged models and picking the one that yields the best performance. Second, clients train both personalized models and exchanged models by using deep mutual learning, in spite of different model architectures across the clients. We perform experiments on three real datasets and show that FedMe outperforms state-of-the-art federated learning methods while tuning model architectures automatically. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 261,161 |
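Each row above pairs an arXiv id, title, and abstract with one boolean per cs.* category label plus a source index. As a closing illustration, here is a minimal sketch of how such a table could be loaded and filtered with the Hugging Face `datasets` library; the Hub path `user/arxiv-cs-labels` and the split name are hypothetical placeholders, not the dataset's actual identifiers.

```python
# Minimal sketch (assumptions noted): load an arXiv category-labelling
# dataset shaped like the rows above and filter it by one label column.
# The Hub path "user/arxiv-cs-labels" is a hypothetical placeholder.
from datasets import load_dataset

ds = load_dataset("user/arxiv-cs-labels", split="train")  # hypothetical path/split

# Each row carries the paper id, title, abstract, and one boolean per category.
ml_rows = ds.filter(lambda row: row["cs.LG"])

print(f"{len(ml_rows)} of {len(ds)} rows are tagged cs.LG")
print(ml_rows[0]["id"], "-", ml_rows[0]["title"])
```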