id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2403.18085 | ANOCA: AC Network-aware Optimal Curtailment Approach for Dynamic Hosting Capacity | With exponential growth in distributed energy resources (DERs) coupled with at-capacity distribution grid infrastructure, prosumers cannot always export all extra power to the grid without violating technical limits. Consequently, a slew of dynamic hosting capacity (DHC) algorithms have emerged for optimal utilization of grid infrastructure while maximizing export from DERs. Most of these DHC algorithms utilize the concept of operating envelopes (OE), where the utility gives prosumers technical power export limits, and they are free to export power within these limits. Recent studies have shown that OE-based frameworks have drawbacks, as most develop power export limits based on convex or linear grid models. As OEs must capture extreme operating conditions, both convex and linear models can violate technical limits in practice because they approximate grid physics. However, AC models are unsuitable because they may not be feasible within the whole region of OE. We propose a new two-stage optimization framework for DHC built on three-phase AC models to address the current gaps. In this approach, the prosumers first run a receding horizon multi-period optimization to identify optimal export power setpoints to communicate with the utility. The utility then performs an infeasibility-based optimization to either accept the prosumer's request or dispatch an optimal curtail signal such that overall system technical constraints are not violated. To explore various curtailment strategies, we develop an L1, L2, and Linf norm-based dispatch algorithm with an exact three-phase AC model. We test our framework on a 1420 three-phase node meshed distribution network and show that the proposed algorithm optimally curtails DERs while guaranteeing the AC feasibility of the network. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 441,752 |
1902.10297 | Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks | We investigate the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language. Specifically, we train an RNN on positive and negative examples from a regular language, and ask if there is a simple decoding function that maps states of this RNN to states of the minimal deterministic finite automaton (MDFA) for the language. Our experiments show that such a decoding function indeed exists, and that it maps states of the RNN not to MDFA states, but to states of an {\em abstraction} obtained by clustering small sets of MDFA states into "superstates". A qualitative analysis reveals that the abstraction often has a simple interpretation. Overall, the results suggest a strong structural relationship between internal representations used by RNNs and finite automata, and explain the well-known ability of RNNs to recognize formal grammatical structure. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 122,641 |
1806.02964 | BSN: Boundary Sensitive Network for Temporal Action Proposal Generation | Temporal action proposal generation is an important yet challenging problem, since temporal proposals with rich action content are indispensable for analysing real-world videos with long durations and a high proportion of irrelevant content. This problem requires methods not only generating proposals with precise temporal boundaries, but also retrieving proposals to cover ground-truth action instances with high recall and high overlap using relatively fewer proposals. To address these difficulties, we introduce an effective proposal generation method, named Boundary-Sensitive Network (BSN), which adopts a "local to global" fashion. Locally, BSN first locates temporal boundaries with high probabilities, then directly combines these boundaries as proposals. Globally, with Boundary-Sensitive Proposal feature, BSN retrieves proposals by evaluating the confidence of whether a proposal contains an action within its region. We conduct experiments on two challenging datasets: ActivityNet-1.3 and THUMOS14, where BSN outperforms other state-of-the-art temporal action proposal generation methods with high recall and high temporal precision. Finally, further experiments demonstrate that by combining existing action classifiers, our method significantly improves the state-of-the-art temporal action detection performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 99,895 |
2210.14085 | Audio MFCC-gram Transformers for respiratory insufficiency detection in COVID-19 | This work explores speech as a biomarker and investigates the detection of respiratory insufficiency (RI) by analyzing speech samples. Previous work \cite{spira2021} constructed a dataset of respiratory insufficiency COVID-19 patient utterances and analyzed it by means of a convolutional neural network achieving an accuracy of $87.04\%$, validating the hypothesis that one can detect RI through speech. Here, we study how Transformer neural network architectures can improve the performance on RI detection. This approach enables construction of an acoustic model. By choosing the correct pretraining technique, we generate a self-supervised acoustic model, leading to improved performance ($96.53\%$) of Transformers for RI detection. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 326,421 |
1306.2003 | Comparing Edge Detection Methods based on Stochastic Entropies and Distances for PolSAR Imagery | Polarimetric synthetic aperture radar (PolSAR) has achieved a prominent position as a remote imaging method. However, PolSAR images are contaminated by speckle noise due to the coherent illumination employed during the data acquisition. This noise provides a granular aspect to the image, making its processing and analysis (such as in edge detection) hard tasks. This paper discusses seven methods for edge detection in multilook PolSAR images. In all methods, the basic idea consists in detecting transition points in the finest possible strip of data which spans two regions. The edge is contoured using the transition points and a B-spline curve. Four stochastic distances, two differences of entropies, and the maximum likelihood criterion were used under the scaled complex Wishart distribution; the first six stem from the h-phi class of measures. The performance of the discussed detection methods was quantified and analyzed by the computational time and probability of correct edge detection, with respect to the number of looks, the backscatter matrix as a whole, the SPAN, the covariance and the spatial resolution. The detection procedures were applied to three real PolSAR images. Results provide evidence that the methods based on the Bhattacharyya distance and the difference of Shannon entropies outperform the other techniques. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 25,090 |
2111.04105 | DQRE-SCnet: A novel hybrid approach for selecting users in Federated Learning with Deep-Q-Reinforcement Learning based on Spectral Clustering | Machine learning models based on sensitive data in the real-world promise advances in areas ranging from medical screening to disease outbreaks, agriculture, industry, defense science, and more. In many applications, learning participant communication rounds benefit from collecting their own private data sets, teaching detailed machine learning models on the real data, and sharing the benefits of using these models. Due to existing privacy and security concerns, most people avoid sensitive data sharing for training. Without each user demonstrating their local data to a central server, Federated Learning allows various parties to train a machine learning algorithm on their shared data jointly. This method of collective privacy learning results in the expense of important communication during training. Most large-scale machine-learning applications require decentralized learning based on data sets generated on various devices and places. Such datasets represent an essential obstacle to decentralized learning, as their diverse contexts contribute to significant differences in the delivery of data across devices and locations. Researchers have proposed several ways to achieve data privacy in Federated Learning systems. However, there are still challenges with homogeneous local data. This research approach is to select nodes (users) to share their data in Federated Learning for independent data-based equilibrium to improve accuracy, reduce training time, and increase convergence. Therefore, this research presents a combined Deep-Q-Reinforcement Learning Ensemble based on Spectral Clustering called DQRE-SCnet to choose a subset of devices in each communication round. Based on the results, it has been shown that it is possible to decrease the number of communication rounds needed in Federated Learning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 265,388 |
2406.04481 | Optimizing Autonomous Driving for Safety: A Human-Centric Approach with LLM-Enhanced RLHF | Reinforcement Learning from Human Feedback (RLHF) is popular in large language models (LLMs), whereas traditional Reinforcement Learning (RL) often falls short. Current autonomous driving methods typically utilize either human feedback in machine learning, including RL, or LLMs. Most feedback guides the car agent's learning process (e.g., controlling the car). RLHF is usually applied in the fine-tuning step, requiring direct human "preferences," which are not commonly used in optimizing autonomous driving models. In this research, we innovatively combine RLHF and LLMs to enhance autonomous driving safety. Training a model with human guidance from scratch is inefficient. Our framework starts with a pre-trained autonomous car agent model and implements multiple human-controlled agents, such as cars and pedestrians, to simulate real-life road environments. The autonomous car model is not directly controlled by humans. We integrate both physical and physiological feedback to fine-tune the model, optimizing this process using LLMs. This multi-agent interactive environment ensures safe, realistic interactions before real-world application. Finally, we will validate our model using data gathered from real-life testbeds located in New Jersey and New York City. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 461,696 |
2107.11678 | Deep-learning-driven Reliable Single-pixel Imaging with Uncertainty Approximation | Single-pixel imaging (SPI) has the advantages of high-speed acquisition over a broad wavelength range and system compactness, which are difficult to achieve by conventional imaging sensors. However, a common challenge is low image quality arising from undersampling. Deep learning (DL) is an emerging and powerful tool in computational imaging for many applications and researchers have applied DL in SPI to achieve higher image quality than conventional reconstruction approaches. One outstanding challenge, however, is that the accuracy of DL predictions in SPI cannot be assessed in practical applications where the ground truths are unknown. Here, we propose the use of the Bayesian convolutional neural network (BCNN) to approximate the uncertainty (coming from finite training data and network model) of the DL predictions in SPI. Each pixel in the predicted result from BCNN represents the parameter of a probability distribution rather than the image intensity value. Then, the uncertainty can be approximated with BCNN by minimizing a negative log-likelihood loss function in the training stage and Monte Carlo dropout in the prediction stage. The results show that the BCNN can reliably approximate the uncertainty of the DL predictions in SPI with varying compression ratios and noise levels. The predicted uncertainty from BCNN in SPI reveals that most of the reconstruction errors in deep-learning-based SPI come from the edges of the image features. The results show that the proposed BCNN can provide a reliable tool to approximate the uncertainty of DL predictions in SPI and can be widely used in many applications of SPI. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 247,659 |
1407.0166 | Simultaneous Wireless Information and Power Transfer for Two-hop OFDM Relay System | This paper investigates the simultaneous wireless information and power transfer (SWIPT) for two-hop orthogonal frequency division multiplexing (OFDM) decode-and-forward (DF) relay communication system, where a relay harvests energy from radio frequency signals transmitted by the source and then uses the harvested energy to assist the information transmission from the source to its destination. The power splitting receiver is considered at the relay, which splits the received signal into two power streams to perform information decoding (ID) and energy harvesting (EH) respectively. For better understanding the behavior and exploring the performance limit of such a system, resource allocation is studied to maximize the total achievable transmission rate. An optimization problem, which jointly takes into account the power allocation, the subcarrier pairing and the power splitting, is formulated. Due to its non-convexity, a resource allocation policy with low complexity based on separation principle is designed. Simulation results show that the system performance can be significantly improved by using our proposed policy. Moreover, the system performance behavior to the relay position is also discussed, and results show that in the two-hop OFDM system with EH relay, the relay should be deployed near the source, while in that with conventional non-EH relay, it should be deployed at the middle between the source and the destination. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 34,307 |
1903.03659 | A highly parallel multilevel Newton-Krylov-Schwarz method with subspace-based coarsening and partition-based balancing for the multigroup neutron transport equations on 3D unstructured meshes | The multigroup neutron transport equations have been widely used to study the motion of neutrons and their interactions with the background materials. Numerical simulation of the multigroup neutron transport equations is computationally challenging because the equations are defined on a high-dimensional phase space (1D in energy, 2D in angle, and 3D in spatial space), and furthermore, for realistic applications, the computational spatial domain is complex and the materials are heterogeneous. Multilevel domain decomposition methods are among the most popular algorithms for solving the multigroup neutron transport equations, but the construction of coarse spaces is expensive and often not strongly scalable when the number of processor cores is large. In this paper, we study a highly parallel multilevel Newton-Krylov-Schwarz method equipped with several novel components, such as subspace-based coarsening, partition-based balancing and hierarchical mesh partitioning, that make the overall simulation strongly scalable in terms of compute time. Compared with the traditional coarsening method, the subspace-based coarsening algorithm significantly reduces the cost of the preconditioner setup that is often unscalable. In addition, the partition-based balancing strategy enhances the parallel efficiency of the overall solver by assigning a nearly-equal amount of work to each processor core. The hierarchical mesh partitioning is able to generate a large number of subdomains while minimizing the off-node communication. We numerically show that the proposed algorithm is scalable with more than 10,000 processor cores for a realistic application with a few billion unknowns on 3D unstructured meshes. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 123,780 |
2008.00097 | Backpropagation through Signal Temporal Logic Specifications: Infusing Logical Structure into Gradient-Based Methods | This paper presents a technique, named STLCG, to compute the quantitative semantics of Signal Temporal Logic (STL) formulas using computation graphs. STLCG provides a platform which enables the incorporation of logical specifications into robotics problems that benefit from gradient-based solutions. Specifically, STL is a powerful and expressive formal language that can specify spatial and temporal properties of signals generated by both continuous and hybrid systems. The quantitative semantics of STL provide a robustness metric, i.e., how much a signal satisfies or violates an STL specification. In this work, we devise a systematic methodology for translating STL robustness formulas into computation graphs. With this representation, and by leveraging off-the-shelf automatic differentiation tools, we are able to efficiently backpropagate through STL robustness formulas and hence enable a natural and easy-to-use integration of STL specifications with many gradient-based approaches used in robotics. Through a number of examples stemming from various robotics applications, we demonstrate that STLCG is versatile, computationally efficient, and capable of incorporating human-domain knowledge into the problem formulation. | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | true | 189,896 |
1901.10550 | Personalized Treatment Selection using Causal Heterogeneity | Randomized experimentation (also known as A/B testing or bucket testing) is widely used in the internet industry to measure the metric impact obtained by different treatment variants. A/B tests identify the treatment variant showing the best performance, which then becomes the chosen or selected treatment for the entire population. However, the effect of a given treatment can differ across experimental units and a personalized approach for treatment selection can greatly improve upon the usual global selection strategy. In this work, we develop a framework for personalization through (i) estimation of heterogeneous treatment effect at either a cohort or member-level, followed by (ii) selection of optimal treatment variants for cohorts (or members) obtained through (deterministic or stochastic) constrained optimization. We perform a two-fold evaluation of our proposed methods. First, a simulation analysis is conducted to study the effect of personalized treatment selection under carefully controlled settings. This simulation illustrates the differences between the proposed methods and the suitability of each with increasing uncertainty. We also demonstrate the effectiveness of the method through a real-life example related to serving notifications at Linkedin. The solution significantly outperformed both heuristic solutions and the global treatment selection baseline leading to a sizable win on top-line metrics like member visits. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 120,045 |
2402.03038 | Automatic Combination of Sample Selection Strategies for Few-Shot Learning | In few-shot learning, such as meta-learning, few-shot fine-tuning or in-context learning, the limited number of samples used to train a model have a significant impact on the overall success. Although a large number of sample selection strategies exist, their impact on the performance of few-shot learning is not extensively known, as most of them have been so far evaluated in typical supervised settings only. In this paper, we thoroughly investigate the impact of 20 sample selection strategies on the performance of 5 few-shot learning approaches over 8 image and 6 text datasets. In addition, we propose a new method for automatic combination of sample selection strategies (ACSESS) that leverages the strengths and complementary information of the individual strategies. The experimental results show that our method consistently outperforms the individual selection strategies, as well as the recently proposed method for selecting support examples for in-context learning. We also show a strong modality, dataset and approach dependence for the majority of strategies as well as their dependence on the number of shots - demonstrating that sample selection strategies play a significant role at lower numbers of shots, but regress to random selection at higher numbers of shots. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 426,833 |
1707.06122 | Twitter Activity Timeline as a Signature of Urban Neighborhood | Modern cities are complex systems, evolving at a fast pace. Thus, many urban planning, political, and economic decisions require a deep and up-to-date understanding of the local context of urban neighborhoods. This study shows that the structure of openly available social media records, such as Twitter, offers a possibility for building a unique dynamic signature of urban neighborhood function, and, therefore, might be used as an efficient and simple decision support tool. Considering New York City as an example, we investigate how Twitter data can be used to decompose the urban landscape into self-defining zones, aligned with the functional properties of individual neighborhoods and their social and economic characteristics. We further explore the potential of these data for detecting events and evaluating their impact over time and space. This approach paves a way to a methodology for immediate quantification of the impact of urban development programs and the estimation of socioeconomic statistics at a finer spatial-temporal scale, thus allowing urban policy-makers to track neighborhood transformations and foresee undesirable changes in order to take early action before official statistics would be available. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 77,354 |
2011.08683 | Fisher Information of a Family of Generalized Normal Distributions | In this brief note we compute the Fisher information of a family of generalized normal distributions. Fisher information is usually defined for regular distributions, i.e. continuously differentiable (log) density functions whose support does not depend on the family parameter $\theta$. Although the uniform distribution in $[-\theta, + \theta]$ does not satisfy the regularity requirements, as a special case of our result, we will obtain the Fisher information for this family. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 206,958 |
1801.04686 | Hierarchical Coding for Distributed Computing | Coding for distributed computing supports low-latency computation by relieving the burden of straggling workers. While most existing works assume a simple master-worker model, we consider a hierarchical computational structure consisting of groups of workers, motivated by the need to reflect the architectures of real-world distributed computing systems. In this work, we propose a hierarchical coding scheme for this model, as well as analyze its decoding cost and expected computation time. Specifically, we first provide upper and lower bounds on the expected computing time of the proposed scheme. We also show that our scheme enables efficient parallel decoding, thus reducing decoding costs by orders of magnitude over non-hierarchical schemes. When considering both decoding cost and computing time, the proposed hierarchical coding is shown to outperform existing schemes in many practical scenarios. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 88,327 |
0802.2451 | Capacity of General Discrete Noiseless Channels | This paper concerns the capacity of the discrete noiseless channel introduced by Shannon. A sufficient condition is given for the capacity to be well-defined. For a general discrete noiseless channel allowing non-integer valued symbol weights, it is shown that the capacity--if well-defined--can be determined from the radius of convergence of its generating function, from the smallest positive pole of its generating function, or from the rightmost real singularity of its complex generating function. A generalisation is given for Pringsheim's Theorem and for the Exponential Growth Formula to generating functions of combinatorial structures with non-integer valued symbol weights. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 1,304 |
2004.06772 | Channel Hardening in Massive MIMO: Model Parameters and Experimental Assessment | Reliability is becoming increasingly important for many applications envisioned for future wireless systems. A technology that could improve reliability in these systems is massive MIMO (Multiple-Input Multiple-Output). One reason for this is a phenomenon called channel hardening, which means that as the number of antennas in the system increases, the variations of channel gain decrease in both the time- and frequency domain. Our analysis of channel hardening is based on a joint comparison of theory, measurements and simulations. Data from measurement campaigns including both indoor and outdoor scenarios, as well as cylindrical and planar base station arrays, are analyzed. The simulation analysis includes a comparison with the COST 2100 channel model with its massive MIMO extension. The conclusion is that the COST 2100 model is well suited to represent real scenarios, and provides a reasonable match to actual measurements up to the uncertainty of antenna patterns and user interaction. Also, the channel hardening effect in practical massive MIMO channels is less pronounced than in complex independent and identically distributed (i.i.d.) Gaussian channels, which are often considered in theoretical work. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 172,596 |
2205.06127 | Sample Complexity Bounds for Robustly Learning Decision Lists against Evasion Attacks | A fundamental problem in adversarial machine learning is to quantify how much training data is needed in the presence of evasion attacks. In this paper we address this issue within the framework of PAC learning, focusing on the class of decision lists. Given that distributional assumptions are essential in the adversarial setting, we work with probability distributions on the input data that satisfy a Lipschitz condition: nearby points have similar probability. Our key results illustrate that the adversary's budget (that is, the number of bits it can perturb on each input) is a fundamental quantity in determining the sample complexity of robust learning. Our first main result is a sample-complexity lower bound: the class of monotone conjunctions (essentially the simplest non-trivial hypothesis class on the Boolean hypercube) and any superclass has sample complexity at least exponential in the adversary's budget. Our second main result is a corresponding upper bound: for every fixed $k$ the class of $k$-decision lists has polynomial sample complexity against a $\log(n)$-bounded adversary. This sheds further light on the question of whether an efficient PAC learning algorithm can always be used as an efficient $\log(n)$-robust learning algorithm under the uniform distribution. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 296,145 |
2308.04792 | NNPP: A Learning-Based Heuristic Model for Accelerating Optimal Path Planning on Uneven Terrain | Intelligent autonomous path planning is essential for enhancing the exploration efficiency of mobile robots operating in uneven terrains like planetary surfaces and off-road environments. In this paper, we propose the NNPP model for computing the heuristic region, enabling foundation algorithms like A* to find the optimal path solely within this reduced search space, effectively decreasing the search time. The NNPP model learns semantic information about start and goal locations, as well as map representations, from numerous pre-annotated optimal path demonstrations, and produces a probabilistic distribution over each pixel representing the likelihood of it belonging to an optimal path on the map. More specifically, the paper computes the traversal cost for each grid cell from the slope, roughness and elevation difference obtained from the digital elevation model. Subsequently, the start and goal locations are encoded using a Gaussian distribution and different location encoding parameters are analyzed for their effect on model performance. After training, the NNPP model is able to accelerate path planning on novel maps. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 384,560 |
2501.08312 | Everybody Likes to Sleep: A Computer-Assisted Comparison of Object Naming Data from 30 Languages | Object naming - the act of identifying an object with a word or a phrase - is a fundamental skill in interpersonal communication, relevant to many disciplines, such as psycholinguistics, cognitive linguistics, or language and vision research. Object naming datasets, which consist of concept lists with picture pairings, are used to gain insights into how humans access and select names for objects in their surroundings and to study the cognitive processes involved in converting visual stimuli into semantic concepts. Unfortunately, object naming datasets often lack transparency and have a highly idiosyncratic structure. Our study tries to make current object naming data transparent and comparable by using a multilingual, computer-assisted approach that links individual items of object naming lists to unified concepts. Our current sample links 17 object naming datasets that cover 30 languages from 10 different language families. We illustrate how the comparative dataset can be explored by searching for concepts that recur across the majority of datasets and comparing the conceptual spaces of covered object naming datasets with classical basic vocabulary lists from historical linguistics and linguistic typology. Our findings can serve as a basis for enhancing cross-linguistic object naming research and as a guideline for future studies dealing with object naming tasks. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 524,712 |
2407.07890 | Training on the Test Task Confounds Evaluation and Emergence | We study a fundamental problem in the evaluation of large language models that we call training on the test task. Unlike wrongful practices like training on the test data, leakage, or data contamination, training on the test task is not a malpractice. Rather, the term describes a growing set of practices that utilize knowledge about evaluation tasks at training time. We demonstrate that training on the test task confounds both relative model evaluations and claims about emergent capabilities. We argue that the seeming superiority of one model family over another may be explained by a different degree of training on the test task. To this end, we propose an effective method to adjust for the effect of training on the test task on benchmark evaluations. Put simply, we fine-tune each model under comparison on the same task-relevant data before evaluation. We then show that instances of emergent behavior disappear gradually as models train on the test task. Our work promotes a new perspective on the evaluation of large language models with broad implications for benchmarking and the study of emergent capabilities. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 471,939
1511.09209 | Fine-Grained Classification via Mixture of Deep Convolutional Neural
Networks | We present a novel deep convolutional neural network (DCNN) system for fine-grained image classification, called a mixture of DCNNs (MixDCNN). The fine-grained image classification problem is characterised by large intra-class variations and small inter-class variations. To overcome these problems our proposed MixDCNN system partitions images into K subsets of similar images and learns an expert DCNN for each subset. The output from each of the K DCNNs is combined to form a single classification decision. In contrast to previous techniques, we provide a formulation to perform joint end-to-end training of the K DCNNs simultaneously. Extensive experiments, on three datasets using two network structures (AlexNet and GoogLeNet), show that the proposed MixDCNN system consistently outperforms other methods. It provides a relative improvement of 12.7% and achieves state-of-the-art results on two datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 49,647 |
2012.02224 | Personality-Driven Gaze Animation with Conditional Generative
Adversarial Networks | We present a generative adversarial learning approach to synthesize gaze behavior of a given personality. We train the model using an existing data set that comprises eye-tracking data and personality traits of 42 participants performing an everyday task. Given the values of Big-Five personality traits (openness, conscientiousness, extroversion, agreeableness, and neuroticism), our model generates time series data consisting of gaze target, blinking times, and pupil dimensions. We use the generated data to synthesize the gaze motion of virtual agents on a game engine. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 209,691 |
2502.06599 | Joint parameter and state estimation for regularized time-discrete
multibody dynamics | We develop a method for offline parameter estimation of discrete multibody dynamics with regularized and frictional kinematic constraints. This setting leads to unobserved degrees of freedom, which we handle using joint state and parameter estimation. Our method finds the states and parameters as the solution to a nonlinear least squares optimization problem based on the inverse dynamics and the observation error. The solution is found using a Levenberg-Marquardt algorithm with derivatives from automatic differentiation and custom differentiation rules for the complementary conditions that appear due to dry frictional constraints. We reduce the number of method parameters to the choice of the time-step, regularization coefficients, and a parameter that controls the relative weighting of inverse dynamics and observation errors. We evaluate the method using synthetic and real measured data, focusing on performance and sensitivity to method parameters. In particular, we optimize over a 13-dimensional parameter space, including inertial, frictional, tilt, and motor parameters, using data from a real Furuta pendulum. Results show fast convergence, in the order of seconds, and good agreement for different time-series of recorded data over multiple method parameter choices. However, very stiff constraints may cause difficulties in solving the optimization problem. We conclude that our method can be very fast and has method parameters that are robust and easy to set in the tested scenarios. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 532,146 |
2211.11593 | Investigating methods to improve photovoltaic thermal models at
second-to-minute timescales | This paper presents a range of methods to improve the accuracy of equation-based thermal models of PV modules at second-to-minute timescales. We present an RC-equivalent conceptual model for PV modules, where wind effects are captured. We show how the thermal time constant $\tau$ of PV modules can be determined from measured data, and subsequently used to make static thermal models dynamic by applying the Exponential Weighted Mean (EWM) approach to irradiance and wind signals. On average, $\tau$ is $6.3 \pm 1$ min for fixed-mount PV systems. Based on this conceptual model, the Filter-EWM-Mean Bias Error correction (FEM) methodology is developed. We propose two thermal models, WM1 and WM2, and compare these against the models of Ross, Sandia, and Faiman on twenty-four datasets of fifteen sites, with time resolutions ranging from 1 s to 1 h, the majority of these at 1 min resolution. The FEM methodology is shown to reduce model errors (RMSE and MAE) on average for all sites and models versus the standard steady-state equivalent by $-1.1$ K and $-0.75$ K respectively. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 331,798
1911.12249 | Literature Review of Action Recognition in the Wild | This literature review presents an in-depth study of research papers on action recognition in the wild. Action recognition in untrimmed videos is a challenging task, and most of the papers have tackled this problem using hand-crafted features with shallow learning techniques or sophisticated end-to-end deep learning techniques. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 155,346
2102.09109 | Understanding and Creating Art with AI: Review and Outlook | Technologies related to artificial intelligence (AI) have a strong impact on the changes of research and creative practices in visual arts. The growing number of research initiatives and creative applications that emerge in the intersection of AI and art, motivates us to examine and discuss the creative and explorative potentials of AI technologies in the context of art. This paper provides an integrated review of two facets of AI and art: 1) AI is used for art analysis and employed on digitized artwork collections; 2) AI is used for creative purposes and generating novel artworks. In the context of AI-related research for art understanding, we present a comprehensive overview of artwork datasets and recent works that address a variety of tasks such as classification, object detection, similarity retrieval, multimodal representations, computational aesthetics, etc. In relation to the role of AI in creating art, we address various practical and theoretical aspects of AI Art and consolidate related works that deal with those topics in detail. Finally, we provide a concise outlook on the future progression and potential impact of AI technologies on our understanding and creation of art. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | true | 220,667 |
1711.08752 | A Survey on Network Embedding | Network embedding assigns nodes in a network to low-dimensional representations and effectively preserves the network structure. Recently, significant progress has been made toward this emerging network analysis paradigm. In this survey, we focus on categorizing and then reviewing the current development on network embedding methods, and point out its future research directions. We first summarize the motivation of network embedding. We discuss the classical graph embedding algorithms and their relationship with network embedding. Afterwards and primarily, we provide a comprehensive overview of a large number of network embedding methods in a systematic manner, covering the structure- and property-preserving network embedding methods, the network embedding methods with side information and the advanced information preserving network embedding methods. Moreover, several evaluation approaches for network embedding and some useful online resources, including network data sets and software, are reviewed, too. Finally, we discuss the framework of exploiting these network embedding methods to build an effective system and point out some potential future directions. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 85,261
1908.07613 | Implications of Quantum Computing for Artificial Intelligence alignment
research | We explain some key features of quantum computing via three heuristics and apply them to argue that a deep understanding of quantum computing is unlikely to be helpful to address current bottlenecks in Artificial Intelligence Alignment. Our argument relies on the claims that Quantum Computing leads to compute overhang instead of algorithmic overhang, and that the difficulties associated with the measurement of quantum states do not invalidate any major assumptions of current Artificial Intelligence Alignment research agendas. We also discuss tripwiring, adversarial blinding, informed oversight and side effects as possible exceptions. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | true | 142,324 |
1711.00609 | Security Against Impersonation Attacks in Distributed Systems | In a multi-agent system, transitioning from a centralized to a distributed decision-making strategy can introduce vulnerability to adversarial manipulation. We study the potential for adversarial manipulation in a class of graphical coordination games where the adversary can pose as a friendly agent in the game, thereby influencing the decision-making rules of a subset of agents. The adversary's influence can cascade throughout the system, indirectly influencing other agents' behavior and significantly impacting the emergent collective behavior. The main results in this paper focus on characterizing conditions under which the adversary's local influence can dramatically impact the emergent global behavior, e.g., destabilize efficient Nash equilibria. | false | false | false | false | false | false | false | false | false | false | true | false | true | false | true | false | false | true | 83,748 |
2405.18507 | Injecting Hierarchical Biological Priors into Graph Neural Networks for
Flow Cytometry Prediction | In the complex landscape of hematologic samples such as peripheral blood or bone marrow derived from flow cytometry (FC) data, cell-level prediction presents profound challenges. This work explores injecting hierarchical prior knowledge into graph neural networks (GNNs) for single-cell multi-class classification of tabular cellular data. By representing the data as graphs and encoding hierarchical relationships between classes, we propose a hierarchical plug-in method, FCHC-GNN, that can be applied to several GNN models and is designed to capture the neighborhood information crucial for the single-cell FC domain. Extensive experiments on our cohort of 19 distinct patients demonstrate that incorporating hierarchical biological constraints boosts performance significantly across multiple metrics compared to baseline GNNs without such priors. The proposed approach highlights the importance of structured inductive biases for gaining improved generalization in complex biological prediction tasks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 458,450
2210.17505 | Space-Fluid Adaptive Sampling by Self-Organisation | A recurrent task in coordinated systems is managing (estimating, predicting, or controlling) signals that vary in space, such as distributed sensed data or computation outcomes. Especially in large-scale settings, the problem can be addressed through decentralised and situated computing systems: nodes can locally sense, process, and act upon signals, and coordinate with neighbours to implement collective strategies. Accordingly, in this work we devise distributed coordination strategies for the estimation of a spatial phenomenon through collaborative adaptive sampling. Our design is based on the idea of dynamically partitioning space into regions that compete and grow/shrink to provide accurate aggregate sampling. Such regions hence define a sort of virtualised space that is "fluid", since its structure adapts in response to pressure forces exerted by the underlying phenomenon. We provide an adaptive sampling algorithm in the field-based coordination framework, and prove it is self-stabilising and locally optimal. Finally, we verify by simulation that the proposed algorithm effectively carries out a spatially adaptive sampling while maintaining a tuneable trade-off between accuracy and efficiency. | false | false | false | false | true | false | false | false | false | false | true | false | false | false | true | false | false | true | 327,716 |
2402.08472 | Large Language Models for the Automated Analysis of Optimization
Algorithms | The ability of Large Language Models (LLMs) to generate high-quality text and code has fuelled their rise in popularity. In this paper, we aim to demonstrate the potential of LLMs within the realm of optimization algorithms by integrating them into STNWeb. This is a web-based tool for the generation of Search Trajectory Networks (STNs), which are visualizations of optimization algorithm behavior. Although visualizations produced by STNWeb can be very informative for algorithm designers, they often require a certain level of prior knowledge to be interpreted. In an attempt to bridge this knowledge gap, we have incorporated LLMs, specifically GPT-4, into STNWeb to produce extensive written reports, complemented by automatically generated plots, thereby enhancing the user experience and reducing the barriers to the adoption of this tool by the research community. Moreover, our approach can be expanded to other tools from the optimization community, showcasing the versatility and potential of LLMs in this field. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 429,111 |
2302.06556 | VA-DepthNet: A Variational Approach to Single Image Depth Prediction | We introduce VA-DepthNet, a simple, effective, and accurate deep neural network approach for the single-image depth prediction (SIDP) problem. The proposed approach advocates using classical first-order variational constraints for this problem. While state-of-the-art deep neural network methods for SIDP learn the scene depth from images in a supervised setting, they often overlook the invaluable invariances and priors in the rigid scene space, such as the regularity of the scene. The paper's main contribution is to reveal the benefit of classical and well-founded variational constraints in the neural network design for the SIDP task. It is shown that imposing first-order variational constraints in the scene space together with popular encoder-decoder-based network architecture design provides excellent results for the supervised SIDP task. The imposed first-order variational constraint makes the network aware of the depth gradient in the scene space, i.e., regularity. The paper demonstrates the usefulness of the proposed approach via extensive evaluation and ablation analysis over several benchmark datasets, such as KITTI, NYU Depth V2, and SUN RGB-D. The VA-DepthNet at test time shows considerable improvements in depth prediction accuracy compared to the prior art and is accurate also at high-frequency regions in the scene space. At the time of writing this paper, our method -- labeled as VA-DepthNet, when tested on the KITTI depth-prediction evaluation set benchmarks, shows state-of-the-art results, and is the top-performing published approach. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 345,445 |
2102.04209 | Guilty Artificial Minds | The concepts of blameworthiness and wrongness are of fundamental importance in human moral life. But to what extent are humans disposed to blame artificially intelligent agents, and to what extent will they judge their actions to be morally wrong? To make progress on these questions, we adopted two novel strategies. First, we break down attributions of blame and wrongness into more basic judgments about the epistemic and conative state of the agent, and the consequences of the agent's actions. In this way, we are able to examine any differences between the way participants treat artificial agents in terms of differences in these more basic judgments. Our second strategy is to compare attributions of blame and wrongness across human, artificial, and group agents (corporations). Others have compared attributions of blame and wrongness between human and artificial agents, but the addition of group agents is significant because these agents seem to provide a clear middle-ground between human agents (for whom the notions of blame and wrongness were created) and artificial agents (for whom the question remains open). | true | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 219,026
1107.0045 | Graduality in Argumentation | Argumentation is based on the exchange and valuation of interacting arguments, followed by the selection of the most acceptable of them (for example, in order to take a decision, to make a choice). Starting from the framework proposed by Dung in 1995, our purpose is to introduce 'graduality' in the selection of the best arguments, i.e., to be able to partition the set of the arguments in more than the two usual subsets of 'selected' and 'non-selected' arguments in order to represent different levels of selection. Our basic idea is that an argument is all the more acceptable if it can be preferred to its attackers. First, we discuss general principles underlying a 'gradual' valuation of arguments based on their interactions. Following these principles, we define several valuation models for an abstract argumentation system. Then, we introduce 'graduality' in the concept of acceptability of arguments. We propose new acceptability classes and a refinement of existing classes taking advantage of an available 'gradual' valuation. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 11,116 |
2307.02002 | Interpretable and Secure Trajectory Optimization for UAV-Assisted
Communication | Unmanned aerial vehicles (UAVs) have gained popularity due to their flexible mobility, on-demand deployment, and the ability to establish high probability line-of-sight wireless communication. As a result, UAVs have been extensively used as aerial base stations (ABSs) to supplement ground-based cellular networks for various applications. However, existing UAV-assisted communication schemes mainly focus on trajectory optimization and power allocation, while ignoring the issue of collision avoidance during UAV flight. To address this issue, this paper proposes an interpretable UAV-assisted communication scheme that decomposes reliable UAV services into two sub-problems. The first is the constrained UAV coordinates and power allocation problem, which is solved using the Dueling Double DQN (D3QN) method. The second is the constrained UAV collision avoidance and trajectory optimization problem, which is addressed through the Monte Carlo tree search (MCTS) method. This approach ensures both reliable and efficient operation of UAVs. Moreover, we propose a scalable explainable artificial intelligence (XAI) framework that enables more transparent and reliable system decisions. The proposed scheme's interpretability generates explainable and trustworthy results, making it easier to comprehend, validate, and control UAV-assisted communication solutions. Through extensive experiments, we demonstrate that our proposed algorithm outperforms existing techniques in terms of performance and generalization. The proposed model improves the reliability, efficiency, and safety of UAV-assisted communication systems, making it a promising solution for future UAV-assisted communication applications. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 377,546
1902.10280 | A New Algorithm for Improved Blind Detection of Polar Coded PDCCH in 5G
New Radio | In the recent release of the new cellular standard known as 5G New Radio (5G-NR), the physical downlink control channel (PDCCH) has adopted polar codes for error protection. Similar to 4G-LTE, each active user equipment (UE) must blindly detect its own PDCCH in the downlink search space. This work investigates new ways to improve the accuracy of PDCCH blind detection in 5G-NR. We develop a novel design of joint detection and decoding receiver for 5G multiple-input multiple-output (MIMO) transceivers. We aim to achieve robustness against practical obstacles including channel state information (CSI) errors, noise, co-channel interferences, and pilot contamination. To optimize the overall receiver performance in PDCCH blind detection, we incorporate the polar code information during the signal detection stage by relaxing and transforming the Galois field code constraints into the complex signal field. Specifically, we develop a novel joint linear programming (LP) formulation that takes into consideration the transformed polar code constraints. Our proposed joint LP formulation can also be integrated with polar decoders to deliver superior receiver performance at low cost. We further introduce a metric that can be used to eliminate most of the wrong PDCCH candidates to improve the computational efficiency of PDCCH blind detection for 5G-NR. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 122,636
2311.07766 | Vision-Language Integration in Multimodal Video Transformers (Partially)
Aligns with the Brain | Integrating information from multiple modalities is arguably one of the essential prerequisites for grounding artificial intelligence systems with an understanding of the real world. Recent advances in video transformers that jointly learn from vision, text, and sound over time have made some progress toward this goal, but the degree to which these models integrate information from modalities still remains unclear. In this work, we present a promising approach for probing a pre-trained multimodal video transformer model by leveraging neuroscientific evidence of multimodal information processing in the brain. Using brain recordings of participants watching a popular TV show, we analyze the effects of multi-modal connections and interactions in a pre-trained multi-modal video transformer on the alignment with uni- and multi-modal brain regions. We find evidence that vision enhances masked prediction performance during language processing, providing support that cross-modal representations in models can benefit individual modalities. However, we don't find evidence of brain-relevant information captured by the joint multi-modal transformer representations beyond that captured by all of the individual modalities. We finally show that the brain alignment of the pre-trained joint representation can be improved by fine-tuning using a task that requires vision-language inferences. Overall, our results paint an optimistic picture of the ability of multi-modal transformers to integrate vision and language in partially brain-relevant ways but also show that improving the brain alignment of these models may require new approaches. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | false | 407,460 |
2411.15130 | Learning-based Trajectory Tracking for Bird-inspired Flapping-Wing
Robots | Bird-sized flapping-wing robots offer significant potential for agile flight in complex environments, but achieving agile and robust trajectory tracking remains a challenge due to the complex aerodynamics and highly nonlinear dynamics inherent in flapping-wing flight. In this work, a learning-based control approach is introduced to unlock the versatility and adaptiveness of flapping-wing flight. We propose a model-free reinforcement learning (RL)-based framework for a high degree-of-freedom (DoF) bird-inspired flapping-wing robot that allows for multimodal flight and agile trajectory tracking. Stability analysis was performed on the closed-loop system comprising the flapping-wing system and the RL policy. Additionally, simulation results demonstrate that the RL-based controller can successfully learn complex wing trajectory patterns, achieve stable flight, switch between flight modes spontaneously, and track different trajectories under various aerodynamic conditions. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 510,452
2304.04640 | NeuroBench: A Framework for Benchmarking Neuromorphic Computing
Algorithms and Systems | Neuromorphic computing shows promise for advancing computing efficiency and capabilities of AI applications using brain-inspired principles. However, the neuromorphic research field currently lacks standardized benchmarks, making it difficult to accurately measure technological advancements, compare performance with conventional methods, and identify promising future research directions. Prior neuromorphic computing benchmark efforts have not seen widespread adoption due to a lack of inclusive, actionable, and iterative benchmark design and guidelines. To address these shortcomings, we present NeuroBench: a benchmark framework for neuromorphic computing algorithms and systems. NeuroBench is a collaboratively-designed effort from an open community of researchers across industry and academia, aiming to provide a representative structure for standardizing the evaluation of neuromorphic approaches. The NeuroBench framework introduces a common set of tools and systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings. In this article, we outline tasks and guidelines for benchmarks across multiple application domains, and present initial performance baselines across neuromorphic and conventional approaches for both benchmark tracks. NeuroBench is intended to continually expand its benchmarks and features to foster and track the progress made by the research community. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 357,297 |
1905.13132 | Content based News Recommendation via Shortest Entity Distance over
Knowledge Graphs | Content-based news recommendation systems need to recommend news articles based on the topics and content of articles without using user specific information. Many news articles describe the occurrence of specific events and named entities including people, places or objects. In this paper, we propose a graph traversal algorithm as well as a novel weighting scheme for cold-start content based news recommendation utilizing these named entities. Seeking to create a higher degree of user-specific relevance, our algorithm computes the shortest distance between named entities, across news articles, over a large knowledge graph. Moreover, we have created a new human annotated data set for evaluating content based news recommendation systems. Experimental results show our method is suitable to tackle the hard cold-start problem and it produces stronger Pearson correlation to human similarity scores than other cold-start methods. Our method is also complementary and a combination with the conventional cold-start recommendation methods may yield significant performance gains. The dataset, CNRec, is available at: https://github.com/kevinj22/CNRec | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 133,001 |
2109.02385 | Printed Texts Tracking and Following for a Finger-Wearable
Electro-Braille System Through Opto-electrotactile Feedback | This paper presents our recent development on a portable and refreshable text reading and sensory substitution system for the blind or visually impaired (BVI), called Finger-eye. The system mainly consists of an opto-text processing unit and a compact electro-tactile based display that can deliver text-related electrical signals to the fingertip skin through a wearable and Braille-dot patterned electrode array and thus delivers the electro-stimulation based Braille touch sensations to the fingertip. To achieve the goal of aiding BVI to read any text not written in Braille through this portable system, in this work, a Rapid Optical Character Recognition (R-OCR) method is firstly developed for real-time processing text information based on a Fisheye imaging device mounted at the finger-wearable electro-tactile display. This allows real-time translation of printed text to electro-Braille along with natural movement of user's fingertip as if reading any Braille display or book. More importantly, an electro-tactile neuro-stimulation feedback mechanism is proposed and incorporated with the R-OCR method, which facilitates a new opto-electrotactile feedback based text line tracking control approach that enables text line following by user fingertip during reading. Multiple experiments were designed and conducted to test the ability of blindfolded participants to read through and follow the text line based on the opto-electrotactile-feedback method. The experiments show that as the result of the opto-electrotactile-feedback, the users were able to maintain their fingertip within a $2mm$ distance of the text while scanning a text line. This research is a significant step to aid the BVI users with a portable means to translate and follow to read any printed text to Braille, whether in the digital realm or physically, on any surface. | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 253,731
2409.16073 | Open-World Object Detection with Instance Representation Learning | While humans naturally identify novel objects and understand their relationships, deep learning-based object detectors struggle to detect and relate objects that are not observed during training. To overcome this issue, Open World Object Detection (OWOD) has been introduced to enable models to detect unknown objects in open-world scenarios. However, OWOD methods fail to capture the fine-grained relationships between detected objects, which are crucial for comprehensive scene understanding and applications such as class discovery and tracking. In this paper, we propose a method to train an object detector that can both detect novel objects and extract semantically rich features in open-world conditions by leveraging the knowledge of Vision Foundation Models (VFM). We first utilize the semantic masks from the Segment Anything Model to supervise the box regression of unknown objects, ensuring accurate localization. By transferring the instance-wise similarities obtained from the VFM features to the detector's instance embeddings, our method then learns a semantically rich feature space of these embeddings. Extensive experiments show that our method learns a robust and generalizable feature space, outperforming other OWOD-based feature extraction methods. Additionally, we demonstrate that the enhanced feature from our model increases the detector's applicability to tasks such as open-world tracking. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 491,193 |
2412.04078 | Mind the Gap: Towards Generalizable Autonomous Penetration Testing via
Domain Randomization and Meta-Reinforcement Learning | With increasing numbers of vulnerabilities exposed on the internet, autonomous penetration testing (pentesting) has emerged as a promising research area. Reinforcement learning (RL) is a natural fit for studying this topic. However, two key challenges limit the applicability of RL-based autonomous pentesting in real-world scenarios: (a) training environment dilemma -- training agents in simulated environments is sample-efficient, but ensuring their realism remains challenging; (b) poor generalization ability -- agents' policies often perform poorly when transferred to unseen scenarios, with even slight changes potentially causing a significant generalization gap. To this end, we propose GAP, a generalizable autonomous pentesting framework that aims to realize efficient policy training in realistic environments and to train generalizable agents capable of drawing inferences about other cases from one instance. GAP introduces a Real-to-Sim-to-Real pipeline that (a) enables end-to-end policy learning in unknown real environments while constructing realistic simulations; (b) improves agents' generalization ability by leveraging domain randomization and meta-RL. Specifically, we are among the first to apply domain randomization in autonomous pentesting, and we propose a large language model-powered domain randomization method for synthetic environment generation. We further apply meta-RL to improve agents' generalization ability in unseen environments by leveraging the synthetic environments. The combination of the two methods effectively bridges the generalization gap and improves agents' policy adaptation performance. Experiments are conducted on various vulnerable virtual machines, with results showing that GAP can enable policy learning in various realistic environments, achieve zero-shot policy transfer in similar environments, and realize rapid policy adaptation in dissimilar environments. 
| false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 514,249 |
1108.4548 | Ant Colony Optimization of Rough Set for HV Bushings Fault Detection | Most transformer failures are attributed to bushing failures. Hence it is necessary to monitor the condition of bushings. In this paper, three methods are developed to monitor the condition of oil-filled bushings. Multi-layer perceptron (MLP), Radial basis function (RBF) and Rough Set (RS) models are developed and combined through majority voting to form a committee. The MLP performs better than the RBF and the RS in terms of classification accuracy. The RBF is the fastest to train. The committee performs better than the individual models. The diversity of the models is measured to evaluate their similarity when used in the committee. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 11,780 |
2405.02809 | Does Optimal Control Always Benefit from Better Prediction? An Analysis
Framework for Predictive Optimal Control | The ``prediction + optimal control'' scheme has shown good performance in many applications of automotive, traffic, robot, and building control. In practice, the prediction results are simply considered correct in the optimal control design process. However, in reality, these predictions may never be perfect. Under a conventional stochastic optimal control formulation, it is difficult to answer questions like ``what if the predictions are wrong''. This paper presents an analysis framework for predictive optimal control where the subjective belief about the future is no longer considered perfect. A novel concept called the hidden prediction state is proposed to establish connections among the predictors, the subjective beliefs, the control policies and the objective control performance. Based on this framework, the predictor evaluation problem is analyzed. Three commonly-used predictor evaluation measures, including the mean squared error, the regret and the log-likelihood, are considered. It is shown that neither using the mean square error nor using the likelihood can guarantee a monotonic relationship between the predictor error and the optimal control cost. To guarantee control cost improvement, it is suggested the predictor should be evaluated with the control performance, e.g., using the optimal control cost or the regret to evaluate predictors. Numerical examples and examples from automotive applications with real-world driving data are provided to illustrate the ideas and the results. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 451,930 |
cs/0306006 | Experience with the Open Source based implementation for ATLAS
Conditions Data Management System | Conditions data in high energy physics experiments is frequently seen as all the data needed for reconstruction besides the event data itself. This includes all sorts of slowly evolving data like detector alignment, calibration and robustness, and data from the detector control system. Also, every Conditions Data Object is associated with a time interval of validity and a version. Besides that, it is quite often useful to tag collections of Conditions Data Objects altogether. These issues have already been investigated and a data model has been proposed and used for different implementations based on commercial DBMSs, both at CERN and for the BaBar experiment. The special case of the ATLAS complex trigger, which requires online access to calibration and alignment data, poses new challenges that have to be met using a flexible and customizable solution more in the line of Open Source components. Motivated by the ATLAS challenges, we have developed an alternative implementation based on an Open Source RDBMS. Several issues were investigated and will be described in this paper: the best way to map the conditions data model onto the relational database concept, considering what are foreseen as the most frequent queries; the clustering model best suited to address the scalability problem; and the extensive tests that were performed. The very promising results from these tests are attracting the attention of the HEP community and driving further developments. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 537,860 |
1812.10119 | Sequence to Sequence Learning for Query Expansion | Using sequence to sequence algorithms for query expansion has not yet been explored in the Information Retrieval literature, nor in Question Answering's. We tried to fill this gap in the literature with a custom Query Expansion engine trained and tested on open datasets. Starting from open datasets, we built a Query Expansion training set using sentence-embeddings-based Keyword Extraction. We therefore assessed the ability of Sequence to Sequence neural networks to capture expanding relations in the word embeddings' space. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 117,296 |
2105.06284 | Ergodic Capacity of High Throughput Satellite Systems With Mixed FSO-RF
Transmission | We study a high throughput satellite system, where the feeder link uses free-space optical (FSO) and the user link uses radio frequency (RF) communication. In particular, we first propose a transmit diversity using Alamouti space time block coding to mitigate the atmospheric turbulence in the feeder link. Then, based on the concept of average virtual signal-to-interference-plus-noise ratio and one-bit feedback, we propose a beamforming algorithm for the user link to maximize the ergodic capacity (EC). Moreover, by assuming that the FSO links follow the Malaga distribution whereas RF links undergo the shadowed-Rician fading, we derive a closed-form EC expression of the considered system. Finally, numerical simulations validate the accuracy of our theoretical analysis, and show that the proposed schemes can achieve higher capacity compared with the reference schemes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 235,077 |
2103.00167 | Inferring Unobserved Events in Systems With Shared Resources and Queues | To identify the causes of performance problems or to predict process behavior, it is essential to have correct and complete event data. This is particularly important for distributed systems with shared resources, e.g., one case can block another case competing for the same machine, leading to inter-case dependencies in performance. However, due to a variety of reasons, real-life systems often record only a subset of all events taking place. To understand and analyze the behavior and performance of processes with shared resources, we aim to reconstruct bounds for timestamps of events in a case that must have happened but were not recorded by inference over events in other cases in the system. We formulate and solve the problem by systematically introducing multi-entity concepts in event logs and process models. We introduce a partial-order based model of a multi-entity event log and a corresponding compositional model for multi-entity processes. We define PQR-systems as a special class of multi-entity processes with shared resources and queues. We then study the problem of inferring from an incomplete event log unobserved events and their timestamps that are globally consistent with a PQR-system. We solve the problem by reconstructing unobserved traces of resources and queues according to the PQR-model and derive bounds for their timestamps using a linear program. While the problem is illustrated for material handling systems like baggage handling systems in airports, the approach can be applied to other settings where recording is incomplete. The ideas have been implemented in ProM and were evaluated using both synthetic and real-life event logs. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 222,172 |
1107.3199 | Performance Guarantee under Longest-Queue-First Schedule in Wireless
Networks | Efficient link scheduling in a wireless network is challenging. Typical optimal algorithms require solving an NP-hard sub-problem. To meet the challenge, one stream of research focuses on finding simpler sub-optimal algorithms that have low complexity but high efficiency in practice. In this paper, we study the performance guarantee of one such scheduling algorithm, the Longest-Queue-First (LQF) algorithm. It is known that the LQF algorithm achieves the full capacity region, $\Lambda$, when the interference graph satisfies the so-called local pooling condition. For a general graph $G$, LQF achieves (i.e., stabilizes) a part of the capacity region, $\sigma^*(G) \Lambda$, where $\sigma^*(G)$ is the overall local pooling factor of the interference graph $G$ and $\sigma^*(G) \leq 1$. It has been shown later that LQF achieves a larger rate region, $\Sigma^*(G) \Lambda$, where $\Sigma^*(G)$ is a diagonal matrix. The contribution of this paper is to describe three new achievable rate regions, which are larger than the previously known regions. In particular, the new regions include all the extreme points of the capacity region and are not convex in general. We also discover a counter-intuitive phenomenon in which increasing the arrival rate may sometimes help to stabilize the network. This phenomenon can be well explained using the theory developed in the paper. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 11,320 |
2403.08222 | Robust Decision Aggregation with Adversarial Experts | We consider a robust aggregation problem in the presence of both truthful and adversarial experts. The truthful experts will report their private signals truthfully, while the adversarial experts can report arbitrarily. We assume experts are marginally symmetric in the sense that they share the same common prior and marginal posteriors. The rule maker needs to design an aggregator to predict the true world state from these experts' reports, without knowledge of the underlying information structures or adversarial strategies. We aim to find the optimal aggregator that outputs a forecast minimizing regret under the worst information structure and adversarial strategies. The regret is defined by the difference in expected loss between the aggregator and a benchmark who aggregates optimally given the information structure and reports of truthful experts. We focus on binary states and reports. Under L1 loss, we show that the truncated mean aggregator is optimal. When there are at most k adversaries, this aggregator discards the k lowest and highest reported values and averages the remaining ones. For L2 loss, the optimal aggregators are piecewise linear functions. All the optimalities hold when the ratio of adversaries is bounded above by a value determined by the experts' priors and posteriors. The regret only depends on the ratio of adversaries, not on their total number. For hard aggregators that output a decision, we prove that a random version of the truncated mean is optimal for both L1 and L2. This aggregator randomly follows a remaining value after discarding the $k$ lowest and highest reported values. We extend the hard aggregator to multi-state setting. We evaluate our aggregators numerically in an ensemble learning task. We also obtain negative results for general adversarial aggregation problems under broader information structures and report spaces. 
| false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 437,239 |
2403.08013 | Supervised Time Series Classification for Anomaly Detection in Subsea
Engineering | Time series classification is of significant importance in monitoring structural systems. In this work, we investigate the use of supervised machine learning classification algorithms on simulated data based on a physical system with two states: Intact and Broken. We provide a comprehensive discussion of the preprocessing of temporal data, using measures of statistical dispersion and dimension reduction techniques. We present an intuitive baseline method and discuss its efficiency. We conclude with a comparison of the various methods based on different performance metrics, showing the advantage of using machine learning techniques as a tool in decision making. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 437,140 |
2004.01770 | Software Engineering For Automated Game Design | As we develop more assistive and automated game design systems, the question of how these systems should be integrated into game development workflows, and how much adaptation may be required, becomes increasingly important. In this paper we explore the impact of software engineering decisions on the ability of an automated game design system to understand a game's codebase, generate new game code, and evaluate its work. We argue that a new approach to software engineering may be required in order for game developers to fully benefit from automated game designers. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 171,003 |
2205.14764 | 6N-DoF Pose Tracking for Tensegrity Robots | Tensegrity robots, which are composed of compressive elements (rods) and flexible tensile elements (e.g., cables), have a variety of advantages, including flexibility, low weight, and resistance to mechanical impact. Nevertheless, the hybrid soft-rigid nature of these robots also complicates the ability to localize and track their state. This work aims to address what has been recognized as a grand challenge in this domain, i.e., the state estimation of tensegrity robots through a markerless, vision-based method, as well as novel, onboard sensors that can measure the length of the robot's cables. In particular, an iterative optimization process is proposed to track the 6-DoF pose of each rigid element of a tensegrity robot from an RGB-D video as well as endcap distance measurements from the cable sensors. To ensure that the pose estimates of rigid elements are physically feasible, i.e., they are not resulting in collisions between rods or with the environment, physical constraints are introduced during the optimization. Real-world experiments are performed with a 3-bar tensegrity robot, which performs locomotion gaits. Given ground truth data from a motion capture system, the proposed method achieves less than 1~cm translation error and 3 degrees rotation error, which significantly outperforms alternatives. At the same time, the approach can provide accurate pose estimation throughout the robot's motion, while motion capture often fails due to occlusions. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 299,480 |
1904.04875 | Non-Lambertian Surface Shape and Reflectance Reconstruction Using
Concentric Multi-Spectral Light Field | Recovering the shape and reflectance of non-Lambertian surfaces remains a challenging problem in computer vision since the view-dependent appearance invalidates traditional photo-consistency constraint. In this paper, we introduce a novel concentric multi-spectral light field (CMSLF) design that is able to recover the shape and reflectance of surfaces with arbitrary material in one shot. Our CMSLF system consists of an array of cameras arranged on concentric circles where each ring captures a specific spectrum. Coupled with a multi-spectral ring light, we are able to sample viewpoint and lighting variations in a single shot via spectral multiplexing. We further show that such concentric camera/light setting results in a unique pattern of specular changes across views that enables robust depth estimation. We formulate a physical-based reflectance model on CMSLF to estimate depth and multi-spectral reflectance map without imposing any surface prior. Extensive synthetic and real experiments show that our method outperforms state-of-the-art light field-based techniques, especially in non-Lambertian scenes. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 127,148 |
2303.03614 | A Fast Insertion Operator for Ridesharing over Time-Dependent Road
Networks | Ridesharing has recently become a promising travel mode due to its economic and social benefits. As an essential operator, the "insertion operator" has been extensively studied over static road networks. When a new request appears, the insertion operator is used to find the optimal positions in a worker's current route at which to insert the origin and destination of this request, so as to minimize the travel time of this worker. Previous works study how to conduct the insertion operation efficiently in static road networks; however, in reality, route planning should be addressed by considering the dynamic traffic scenario (i.e., a time-dependent road network). Unfortunately, existing solutions to the insertion operator become inefficient under this setting. Thus, this paper studies the insertion operator over time-dependent road networks. Specifically, to reduce the high time complexity $O(n^3)$ of the existing solution, we calculate the compound travel time functions along the route to speed up the calculation of the travel time between vertex pairs belonging to the route; as a result, the time complexity of an insertion can be reduced to $O(n^2)$. Finally, we further improve the method to a linear-time insertion algorithm by showing that it only needs $O(1)$ time to find the best position in the current route to insert the origin when linearly enumerating each possible position for the new request's destination. Evaluations on two real-world and large-scale datasets show that our methods can accelerate the existing insertion algorithm by up to 25 times. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 349,790 |
2309.15668 | A New Centralized Multi-Node Repair Scheme of MSR codes with
Error-Correcting Capability | Minimum storage regenerating (MSR) codes, with the MDS property and the optimal repair bandwidth, are widely used in distributed storage systems (DSS) for data recovery. In this paper, we consider the construction of $(n,k,l)$ MSR codes in the centralized model that can repair $h$ failed nodes simultaneously with $e$ out of $d$ helper nodes providing erroneous information. We first propose the new repair scheme, and give a complete proof of the lower bound on the amount of symbols downloaded from the helper nodes, provided that some of the helper nodes provide erroneous information. Then we focus on two explicit constructions with the proposed repair scheme. For $2\leq h\leq n-k$, $k+2e\leq d \leq n-h$ and $d\equiv k+2e \pmod{h}$, the first one has the UER $(h, d)$-optimal repair property, and the second one has the UER $(h, d)$-optimal access property. Compared with the original constructions (Ye and Barg, IEEE Trans. Inf. Theory, Vol. 63, April 2017), our constructions have improvements in three aspects: 1) the proposed repair scheme is more feasible than the one-by-one scheme presented by Ye and Barg in a parallel data system; 2) the sub-packetization is reduced from $\left(\operatorname{lcm}(d-k+1, d-k+2,\cdots, d-k+h)\right)^n$ to $\left((d-2e-k+h)/h\right)^n$, a reduction by a factor of at least $(h(d-k+h))^n$; 3) the field size of the first construction is reduced to $|\mathbb{F}| \geq n(d-2e-k+h)/h$, a reduction by a factor of at least $h(d-k+h)$. Small sub-packetization and small field size are preferred in practice due to the limited storage capacity and low computation complexity in the processes of encoding, decoding and repairing. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 395,063 |
2105.07464 | Few-NERD: A Few-Shot Named Entity Recognition Dataset | Recently, considerable literature has grown up around the theme of few-shot named entity recognition (NER), but little published benchmark data specifically focused on the practical and challenging task. Current approaches collect existing supervised NER datasets and re-organize them to the few-shot setting for empirical study. These strategies conventionally aim to recognize coarse-grained entity types with few examples, while in practice, most unseen entity types are fine-grained. In this paper, we present Few-NERD, a large-scale human-annotated few-shot NER dataset with a hierarchy of 8 coarse-grained and 66 fine-grained entity types. Few-NERD consists of 188,238 sentences from Wikipedia, 4,601,160 words are included and each is annotated as context or a part of a two-level entity type. To the best of our knowledge, this is the first few-shot NER dataset and the largest human-crafted NER dataset. We construct benchmark tasks with different emphases to comprehensively assess the generalization capability of models. Extensive empirical results and analysis show that Few-NERD is challenging and the problem requires further research. We make Few-NERD public at https://ningding97.github.io/fewnerd/. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 235,439 |
2208.09708 | DenseShift: Towards Accurate and Efficient Low-Bit Power-of-Two
Quantization | Efficiently deploying deep neural networks on low-resource edge devices is challenging due to their ever-increasing resource requirements. To address this issue, researchers have proposed multiplication-free neural networks, such as Power-of-Two quantization, also known as Shift networks, which aim to reduce memory usage and simplify computation. However, existing low-bit Shift networks are not as accurate as their full-precision counterparts, typically suffering from limited weight range encoding schemes and quantization loss. In this paper, we propose the DenseShift network, which significantly improves the accuracy of Shift networks, achieving competitive performance to full-precision networks for vision and speech applications. In addition, we introduce a method to deploy an efficient DenseShift network using non-quantized floating-point activations, while obtaining a 1.6X speed-up over existing methods. To achieve this, we demonstrate that zero-weight values in low-bit Shift networks do not contribute to model capacity and negatively impact inference computation. To address this issue, we propose a zero-free shifting mechanism that simplifies inference and increases model capacity. We further propose a sign-scale decomposition design to enhance training efficiency and a low-variance random initialization strategy to improve the model's transfer learning performance. Our extensive experiments on various computer vision and speech tasks demonstrate that DenseShift outperforms existing low-bit multiplication-free networks and achieves competitive performance compared to full-precision networks. Furthermore, our proposed approach exhibits strong transfer learning performance without a drop in accuracy. Our code was released on GitHub. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 313,807 |
2111.10653 | Real-time Human Detection Model for Edge Devices | Building a small-sized, fast surveillance system model to fit on limited-resource devices is a challenging yet important task. Convolutional Neural Networks (CNNs) have replaced traditional feature extraction and machine learning models in detection and classification tasks. Various complex large CNN models have been proposed that achieve significant improvements in accuracy. Lightweight CNN models have recently been introduced for real-time tasks. This paper suggests a CNN-based lightweight model that can fit on a limited edge device such as a Raspberry Pi. Our proposed model provides better processing time and smaller size, with comparable accuracy to existing methods. The model's performance is evaluated on multiple benchmark datasets. It is also compared with existing models in terms of size, average processing time, and F-score. Other enhancements for future research are suggested. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 267,401 |
1207.1765 | Object Recognition with Multi-Scale Pyramidal Pooling Networks | We present a Multi-Scale Pyramidal Pooling Network, featuring a novel pyramidal pooling layer at multiple scales and a novel encoding layer. Thanks to the former the network does not require all images of a given classification task to be of equal size. The encoding layer improves generalisation performance in comparison to similar neural network architectures, especially when training data is scarce. We evaluate and compare our system to convolutional neural networks and state-of-the-art computer vision methods on various benchmark datasets. We also present results on industrial steel defect classification, where existing architectures are not applicable because of the constraint on equally sized input images. The proposed architecture can be seen as a fully supervised hierarchical bag-of-features extension that is trained online and can be fine-tuned for any given task. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | 17,330 |
0812.2301 | Cooperative Hybrid ARQ Protocols: Unified Frameworks for Protocol
Analysis | Cooperative hybrid-ARQ (HARQ) protocols, which can exploit spatial and temporal diversity, have been widely studied. The efficiency of cooperative HARQ protocols is higher than that of cooperative protocols, because retransmissions are only performed when necessary. We classify cooperative HARQ protocols into three decode-and-forward based HARQ (DF-HARQ) protocols and two amplify-and-forward based (AF-HARQ) protocols. To compare these protocols and obtain the optimum parameters, two unified frameworks are developed for protocol analysis. Using the frameworks, we can evaluate and compare the maximum throughput and outage probabilities according to the SNR, the relay location, and the delay constraint for these protocols. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 2,781 |
2407.19174 | Reducing Spurious Correlation for Federated Domain Generalization | The rapid development of multimedia has provided a large amount of data with different distributions for visual tasks, forming different domains. Federated Learning (FL) can efficiently use this diverse data distributed on different client media in a decentralized manner through model sharing. However, in open-world scenarios, there is a challenge: global models may struggle to predict well on entirely new domain data captured by certain media, which were not encountered during training. Existing methods still rely on strong statistical correlations between samples and labels to address this issue, which can be misleading, as some features may establish spurious short-cut correlations with the predictions. To comprehensively address this challenge, we introduce FedCD (Cross-Domain Invariant Federated Learning), an overall optimization framework at both the local and global levels. We introduce the Spurious Correlation Intervener (SCI), which employs invariance theory to locally generate interventers for features in a self-supervised manner to reduce the model's susceptibility to spurious correlated features. Our approach requires no sharing of data or features, only the gradients related to the model. Additionally, we develop the simple yet effective Risk Extrapolation Aggregation strategy (REA), determining aggregation coefficients through mathematical optimization to facilitate global causal invariant predictions. Extensive experiments and ablation studies highlight the effectiveness of our approach. In both classification and object detection generalization tasks, our method outperforms the baselines by an average of at least 1.45% in Acc, 4.8% and 1.27% in mAP50. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 476,673 |
2211.14311 | Sub-1ms Instinctual Interference Adaptive GaN LNA Front-End with Power
and Linearity Tuning | One of the major challenges in communication, radar, and electronic warfare receivers arises from nearby device interference. The paper presents a 2-6 GHz GaN LNA front-end with onboard sensing, processing, and feedback utilizing microcontroller-based controls to achieve adaptation to a variety of interference scenarios through power and linearity regulations. The utilization of GaN LNA provides high power handling capability (30 dBm) and high linearity (OIP3= 30 dBm) for radar and EW applications. The system permits the LNA power consumption to be tuned from 500 mW to 2 W (a 4X increase) in order to adjust the linearity from P\textsubscript{1dB,IN}=-10.5 dBm to 0.5 dBm (>10X increase). Across the tuning range, the noise figure increases by approximately 0.4 dB. Feedback control methods are presented with backgrounds from control theory. The rest of the controls consume $\leq$10$\%$ (100 mW) of nominal LNA power (1 W) to achieve an adaptation time <1 ms. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 332,783
2302.08947 | Learning from Label Proportion with Online Pseudo-Label Decision by
Regret Minimization | This paper proposes a novel and efficient method for Learning from Label Proportions (LLP), whose goal is to train a classifier only by using the class label proportions of instance sets, called bags. We propose a novel LLP method based on an online pseudo-labeling method with regret minimization. As opposed to the previous LLP methods, the proposed method effectively works even if the bag sizes are large. We demonstrate the effectiveness of the proposed method using some benchmark datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 346,241 |
2204.04109 | Fast metric embedding into the Hamming cube | We consider the problem of embedding a subset of $\mathbb{R}^n$ into a low-dimensional Hamming cube in an almost isometric way. We construct a simple, data-oblivious, and computationally efficient map that achieves this task with high probability: we first apply a specific structured random matrix, which we call the double circulant matrix; using that matrix requires linear storage and matrix-vector multiplication can be performed in near-linear time. We then binarize each vector by comparing each of its entries to a random threshold, selected uniformly at random from a well-chosen interval. We estimate the number of bits required for this encoding scheme in terms of two natural geometric complexity parameters of the set - its Euclidean covering numbers and its localized Gaussian complexity. The estimate we derive turns out to be the best that one can hope for - up to logarithmic terms. The key to the proof is a phenomenon of independent interest: we show that the double circulant matrix mimics the behavior of a Gaussian matrix in two important ways. First, it maps an arbitrary set in $\mathbb{R}^n$ into a set of well-spread vectors. Second, it yields a fast near-isometric embedding of any finite subset of $\ell_2^n$ into $\ell_1^m$. This embedding achieves the same dimension reduction as a Gaussian matrix in near-linear time, under an optimal condition - up to logarithmic factors - on the number of points to be embedded. This improves a well-known construction due to Ailon and Chazelle. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 290,538 |
2112.08099 | Encoding Individual Source Sequences for the Wiretap Channel | We consider the problem of encoding a deterministic source sequence (a.k.a.\ individual sequence) for the degraded wiretap channel by means of an encoder and decoder that can both be implemented as finite--state machines. Our first main result is a necessary condition for both reliable and secure transmission in terms of the given source sequence, the bandwidth expansion factor, the secrecy capacity, the number of states of the encoder and the number of states of the decoder. Equivalently, this necessary condition can be presented as a converse bound (i.e., a lower bound) on the smallest achievable bandwidth expansion factor. The bound is asymptotically achievable by Lempel-Ziv compression followed by good channel coding for the wiretap channel. Given that the lower bound is saturated, we also derive a lower bound on the minimum necessary rate of purely random bits needed for local randomness at the encoder in order to meet the security constraint. This bound too is achieved by the same achievability scheme. Finally, we extend the main results to the case where the legitimate decoder has access to a side information sequence, which is another individual sequence that may be related to the source sequence, and a noisy version of the side information sequence leaks to the wiretapper. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 271,695 |
2501.08566 | Towards Lightweight and Stable Zero-shot TTS with Self-distilled
Representation Disentanglement | Zero-shot Text-To-Speech (TTS) synthesis shows great promise for personalized voice customization through voice cloning. However, current methods for achieving zero-shot TTS heavily rely on large model scales and extensive training datasets to ensure satisfactory performance and generalizability across various speakers. This raises concerns regarding both deployment costs and data security. In this paper, we present a lightweight and stable zero-shot TTS system. We introduce a novel TTS architecture designed to effectively model linguistic content and various speaker attributes from source speech and prompt speech, respectively. Furthermore, we present a two-stage self-distillation framework that constructs parallel data pairs for effectively disentangling linguistic content and speakers from the perspective of training data. Extensive experiments show that our system exhibits excellent performance and superior stability on the zero-shot TTS tasks. Moreover, it shows markedly superior computational efficiency, with RTFs of 0.13 and 0.012 on the CPU and GPU, respectively. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 524,818 |
1912.10508 | Direct and Indirect Effects -- An Information Theoretic Perspective | Information theoretic (IT) approaches to quantifying causal influences have experienced some popularity in the literature, in both theoretical and applied (e.g. neuroscience and climate science) domains. While these causal measures are desirable in that they are model agnostic and can capture non-linear interactions, they are fundamentally different from common statistical notions of causal influence in that they (1) compare distributions over the effect rather than values of the effect and (2) are defined with respect to random variables representing a cause rather than specific values of a cause. We here present IT measures of direct, indirect, and total causal effects. The proposed measures are unlike existing IT techniques in that they enable measuring causal effects that are defined with respect to specific values of a cause while still offering the flexibility and general applicability of IT techniques. We provide an identifiability result and demonstrate application of the proposed measures in estimating the causal effect of the El Ni\~no-Southern Oscillation on temperature anomalies in the North American Pacific Northwest. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 158,337 |
2405.20772 | Reinforcement Learning for Sociohydrology | In this study, we discuss how reinforcement learning (RL) provides an effective and efficient framework for solving sociohydrology problems. The efficacy of RL for these types of problems is evident because of its ability to update policies in an iterative manner - something that is also foundational to sociohydrology, where we are interested in representing the co-evolution of human-water interactions. We present a simple case study to demonstrate the implementation of RL in a problem of runoff reduction through management decisions related to changes in land-use land-cover (LULC). We then discuss the benefits of RL for these types of problems and share our perspectives on the future research directions in this area. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 459,512 |
1406.1827 | Recursive Neural Networks Can Learn Logical Semantics | Tree-structured recursive neural networks (TreeRNNs) for sentence meaning have been successful for many applications, but it remains an open question whether the fixed-length representations that they learn can support tasks as demanding as logical deduction. We pursue this question by evaluating whether two such models---plain TreeRNNs and tree-structured neural tensor networks (TreeRNTNs)---can correctly learn to identify logical relationships such as entailment and contradiction using these representations. In our first set of experiments, we generate artificial data from a logical grammar and use it to evaluate the models' ability to learn to handle basic relational reasoning, recursive structures, and quantification. We then evaluate the models on the more natural SICK challenge data. Both models perform competitively on the SICK data and generalize well in all three experiments on simulated data, suggesting that they can learn suitable representations for logical inference in natural language. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | 33,677 |
1910.04810 | Variational Path Optimization of Linear Pentapods with a Simple
Singularity Variety | The class of linear pentapods with a simple singularity variety is obtained by imposing architectural restrictions on the design in such a way that the manipulator's singularity variety is linear in the orientation/position variables. It turns out that such simplification leads to crucial computational advantages while maintaining the machine's applications in some fundamental industrial tasks such as five-axis milling and laser cutting. We assume that a path between a given start and end pose of the end effector is known which is singularity-free and within the manipulator's workspace. An optimization process of the initial path is proposed in such a way that the parallel robot increases its distance to the singularity loci while the motion is being smoothed. In our case the computation time of the optimization is improved as we are dealing with pentapods having simple singularity varieties, allowing a closed-form solution for the local extrema of the singularity distance function. Formally, this process is called variational path optimization, which is the systematic optimization of a path by manipulating its variations of energy and distance to the obstacle, which in this case is the singularity variety. In this process some physical limits of the mechanical joints are also taken into account. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 148,865
2106.03530 | CAiRE in DialDoc21: Data Augmentation for Information-Seeking Dialogue
System | Information-seeking dialogue systems, including knowledge identification and response generation, aim to respond to users with fluent, coherent, and informative responses based on users' needs. To tackle this challenge, we utilize data augmentation methods and several training techniques with the pre-trained language models to learn a general pattern of the task and thus achieve promising performance. In the DialDoc21 competition, our system achieved 74.95 F1 score and 60.74 Exact Match score in subtask 1, and 37.72 SacreBLEU score in subtask 2. Empirical analysis is provided to explain the effectiveness of our approaches. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 239,354
2404.03704 | Improvement of Performance in Freezing of Gait detection in Parkinson's
Disease using Transformer networks and a single waist-worn triaxial
accelerometer | Freezing of gait (FOG) is one of the most incapacitating symptoms in Parkinson's disease, affecting more than 50 percent of patients in advanced stages of the disease. The presence of FOG may lead to falls and a loss of independence with a consequent reduction in the quality of life. Wearable technology and artificial intelligence have been used for automatic FOG detection to optimize monitoring. However, differences between laboratory and daily-life conditions present challenges for the implementation of reliable detection systems. Consequently, improvement of FOG detection methods remains important to provide accurate monitoring mechanisms intended for free-living and real-time use. This paper presents advances in automatic FOG detection using a single body-worn triaxial accelerometer and a novel classification algorithm based on Transformers and convolutional networks. This study was performed with data from 21 patients who manifested FOG episodes while performing activities of daily living in a home setting. Results indicate that the proposed FOG-Transformer can bring a significant improvement in FOG detection using leave-one-subject-out cross-validation (LOSO CV). These results bring opportunities for the implementation of accurate monitoring systems for use in ambulatory or home settings. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 444,374
2007.07675 | 3D Polarized Modulation: System Analysis and Performance | In this paper we present a novel modulation technique for dual polarization communication systems, which reduces the error rate compared with the existing schemes. This modulation places the symbols in a 3D constellation, rather than the classic 2D approach. Adjusting the phase of these symbols depending on the information bits, we are able to increase the bit rate. Hence, the proposed scheme conveys information by selecting both the polarization state and the phase of the radiated electromagnetic wave. We also analyse the performance of 3D Polarized Modulation (PMod) for different constellation sizes and we obtain a curve of rate adaptation. Finally, we compare the proposed 3D PMod with other existing schemes such as single polarization Phase Shift Keying (PSK) and double polarization Vertical Bell Laboratories Layer Space-Time (V-BLAST), both carrying the same number of information bits. The results show that 3D PMod always outperforms all other schemes, except for low-order modulation. Therefore, we can conclude that 3D PMod is an excellent candidate for medium and high modulation order transmissions. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 187,401
2106.07030 | The Backpropagation Algorithm Implemented on Spiking Neuromorphic
Hardware | The capabilities of natural neural systems have inspired new generations of machine learning algorithms as well as neuromorphic very large-scale integrated (VLSI) circuits capable of fast, low-power information processing. However, it has been argued that most modern machine learning algorithms are not neurophysiologically plausible. In particular, the workhorse of modern deep learning, the backpropagation algorithm, has proven difficult to translate to neuromorphic hardware. In this study, we present a neuromorphic, spiking backpropagation algorithm based on synfire-gated dynamical information coordination and processing, implemented on Intel's Loihi neuromorphic research processor. We demonstrate a proof-of-principle three-layer circuit that learns to classify digits from the MNIST dataset. To our knowledge, this is the first work to show a Spiking Neural Network (SNN) implementation of the backpropagation algorithm that is fully on-chip, without a computer in the loop. It is competitive in accuracy with off-chip trained SNNs and achieves an energy-delay product suitable for edge computing. This implementation shows a path for using in-memory, massively parallel neuromorphic processors for low-power, low-latency implementation of modern deep learning applications. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | false | false | 240,739 |
2301.06957 | FewSOME: One-Class Few Shot Anomaly Detection with Siamese Networks | Recent Anomaly Detection techniques have progressed the field considerably but at the cost of increasingly complex training pipelines. Such techniques require large amounts of training data, resulting in computationally expensive algorithms that are unsuitable for settings where only a small number of normal samples are available for training. We propose 'Few Shot anOMaly detection' (FewSOME), a deep One-Class Anomaly Detection algorithm with the ability to accurately detect anomalies having trained on 'few' examples of the normal class and no examples of the anomalous class. We describe FewSOME to be of low complexity given its low data requirement and short training time. FewSOME is aided by pretrained weights with an architecture based on Siamese Networks. By means of an ablation study, we demonstrate how our proposed loss, 'Stop Loss', improves the robustness of FewSOME. Our experiments demonstrate that FewSOME performs at state-of-the-art level on benchmark datasets MNIST, CIFAR-10, F-MNIST and MVTec AD while training on only 30 normal samples, a minute fraction of the data that existing methods are trained on. Moreover, our experiments show FewSOME to be robust to contaminated datasets. We also report F1 score and balanced accuracy in addition to AUC as a benchmark for future techniques to be compared against. Code available: https://github.com/niamhbelton/FewSOME. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 340,791
1908.01241 | Robust Max Entrywise Error Bounds for Tensor Estimation from Sparse
Observations via Similarity Based Collaborative Filtering | Consider the task of estimating a 3-order $n \times n \times n$ tensor from noisy observations of randomly chosen entries in the sparse regime. We introduce a similarity based collaborative filtering algorithm for estimating a tensor from sparse observations and argue that it achieves sample complexity that nearly matches the conjectured computationally efficient lower bound on the sample complexity for the setting of low-rank tensors. Our algorithm uses the matrix obtained from the flattened tensor to compute similarity, and estimates the tensor entries using a nearest neighbor estimator. We prove that the algorithm recovers a finite rank tensor with maximum entry-wise error (MEE) and mean-squared-error (MSE) decaying to $0$ as long as each entry is observed independently with probability $p = \Omega(n^{-3/2 + \kappa})$ for any arbitrarily small $\kappa > 0$. More generally, we establish robustness of the estimator, showing that when arbitrary noise bounded by $\varepsilon \geq 0$ is added to each observation, the estimation error with respect to MEE and MSE degrades by $\text{poly}(\varepsilon)$. Consequently, even if the tensor may not have finite rank but can be approximated within $\varepsilon \geq 0$ by a finite rank tensor, then the estimation error converges to $\text{poly}(\varepsilon)$. Our analysis sheds insight into the conjectured sample complexity lower bound, showing that it matches the connectivity threshold of the graph used by our algorithm for estimating similarity between coordinates. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 140,707 |
2012.08131 | Deep Layout of Custom-size Furniture through Multiple-domain Learning | In this paper, we propose a multiple-domain model for producing a custom-size furniture layout in the interior scene. This model is aimed to support professional interior designers to produce interior decoration solutions with custom-size furniture more quickly. The proposed model combines a deep layout module, a domain attention module, a dimensional domain transfer module, and a custom-size module in the end-end training. Compared with the prior work on scene synthesis, our proposed model enhances the ability of auto-layout of custom-size furniture in the interior room. We conduct our experiments on a real-world interior layout dataset that contains $710,700$ designs from professional designers. Our numerical results demonstrate that the proposed model yields higher-quality layouts of custom-size furniture in comparison with the state-of-art model. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 211,670 |
2304.06833 | Estimate-Then-Optimize versus Integrated-Estimation-Optimization versus
Sample Average Approximation: A Stochastic Dominance Perspective | In data-driven stochastic optimization, model parameters of the underlying distribution need to be estimated from data in addition to the optimization task. Recent literature considers integrating the estimation and optimization processes by selecting model parameters that lead to the best empirical objective performance. This integrated approach, which we call integrated-estimation-optimization (IEO), can be readily shown to outperform simple estimate-then-optimize (ETO) when the model is misspecified. In this paper, we show that a reverse behavior appears when the model class is well-specified and there is sufficient data. Specifically, for a general class of nonlinear stochastic optimization problems, we show that simple ETO outperforms IEO asymptotically when the model class covers the ground truth, in the strong sense of stochastic dominance of the regret. Namely, the entire distribution of the regret, not only its mean or other moments, is always better for ETO compared to IEO. Our results also apply to constrained, contextual optimization problems where the decision depends on observed features. Whenever applicable, we also demonstrate how standard sample average approximation (SAA) performs the worst when the model class is well-specified in terms of regret, and best when it is misspecified. Finally, we provide experimental results to support our theoretical comparisons and illustrate when our insights hold in finite-sample regimes and under various degrees of misspecification. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 358,121 |
2307.02055 | Adversarial Attacks on Image Classification Models: FGSM and Patch
Attacks and their Impact | This chapter introduces the concept of adversarial attacks on image classification models built on convolutional neural networks (CNN). CNNs are very popular deep-learning models which are used in image classification tasks. However, very powerful and pre-trained CNN models working very accurately on image datasets for image classification tasks may perform disastrously when the networks are under adversarial attacks. In this work, two very well-known adversarial attacks are discussed and their impact on the performance of image classifiers is analyzed. These two adversarial attacks are the fast gradient sign method (FGSM) and adversarial patch attack. These attacks are launched on three powerful pre-trained image classifier architectures, ResNet-34, GoogleNet, and DenseNet-161. The classification accuracy of the models in the absence and presence of the two attacks are computed on images from the publicly accessible ImageNet dataset. The results are analyzed to evaluate the impact of the attacks on the image classification task. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 377,572 |
2105.09880 | DeepDarts: Modeling Keypoints as Objects for Automatic Scorekeeping in
Darts using a Single Camera | Existing multi-camera solutions for automatic scorekeeping in steel-tip darts are very expensive and thus inaccessible to most players. Motivated to develop a more accessible low-cost solution, we present a new approach to keypoint detection and apply it to predict dart scores from a single image taken from any camera angle. This problem involves detecting multiple keypoints that may be of the same class and positioned in close proximity to one another. The widely adopted framework for regressing keypoints using heatmaps is not well-suited for this task. To address this issue, we instead propose to model keypoints as objects. We develop a deep convolutional neural network around this idea and use it to predict dart locations and dartboard calibration points within an overall pipeline for automatic dart scoring, which we call DeepDarts. Additionally, we propose several task-specific data augmentation strategies to improve the generalization of our method. As a proof of concept, two datasets comprising 16k images originating from two different dartboard setups were manually collected and annotated to evaluate the system. In the primary dataset containing 15k images captured from a face-on view of the dartboard using a smartphone, DeepDarts predicted the total score correctly in 94.7% of the test images. In a second more challenging dataset containing limited training data (830 images) and various camera angles, we utilize transfer learning and extensive data augmentation to achieve a test accuracy of 84.0%. Because DeepDarts relies only on single images, it has the potential to be deployed on edge devices, giving anyone with a smartphone access to an automatic dart scoring system for steel-tip darts. The code and datasets are available. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 236,192 |
2112.05267 | The Many Faces of Anger: A Multicultural Video Dataset of Negative
Emotions in the Wild (MFA-Wild) | The portrayal of negative emotions such as anger can vary widely between cultures and contexts, depending on the acceptability of expressing full-blown emotions rather than suppression to maintain harmony. The majority of emotional datasets collect data under the broad label ``anger", but social signals can range from annoyed, contemptuous, angry, furious, hateful, and more. In this work, we curated the first in-the-wild multicultural video dataset of emotions, and deeply explored anger-related emotional expressions by asking culture-fluent annotators to label the videos with 6 labels and 13 emojis in a multi-label framework. We provide a baseline multi-label classifier on our dataset, and show how emojis can be effectively used as a language-agnostic tool for annotation. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 270,789 |
2111.09074 | Surrogate-Assisted Genetic Algorithm for Wrapper Feature Selection | Feature selection is an intractable problem; therefore, practical algorithms often trade off the solution accuracy against the computation time. In this paper, we propose a novel multi-stage feature selection framework utilizing multiple levels of approximations, or surrogates. Such a framework allows for using wrapper approaches in a much more computationally efficient way, significantly increasing the quality of feature selection solutions achievable, especially on large datasets. We design and evaluate a Surrogate-Assisted Genetic Algorithm (SAGA) which utilizes this concept to guide the evolutionary search during the early phase of exploration. SAGA only switches to evaluating the original function at the final exploitation phase. We prove that the run-time upper bound of the SAGA surrogate-assisted stage is at worst equal to the wrapper GA, and it scales better for induction algorithms of high order of complexity in number of instances. We demonstrate, using 14 datasets from the UCI ML repository, that in practice SAGA significantly reduces the computation time compared to a baseline wrapper Genetic Algorithm (GA), while converging to solutions of significantly higher accuracy. Our experiments show that SAGA can arrive at near-optimal solutions three times faster than a wrapper GA, on average. We also showcase the importance of the evolution control approach designed to prevent surrogates from misleading the evolutionary search towards false optima. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 266,897
2306.04848 | Interpreting and Improving Diffusion Models from an Optimization
Perspective | Denoising is intuitively related to projection. Indeed, under the manifold hypothesis, adding random noise is approximately equivalent to orthogonal perturbation. Hence, learning to denoise is approximately learning to project. In this paper, we use this observation to interpret denoising diffusion models as approximate gradient descent applied to the Euclidean distance function. We then provide a straightforward convergence analysis of the DDIM sampler under simple assumptions on the projection error of the denoiser. Finally, we propose a new gradient-estimation sampler, generalizing DDIM using insights from our theoretical results. In as few as 5-10 function evaluations, our sampler achieves state-of-the-art FID scores on pretrained CIFAR-10 and CelebA models and can generate high quality samples on latent diffusion models. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 371,930
1909.04305 | Inverse Ising inference from high-temperature re-weighting of
observations | Maximum Likelihood Estimation (MLE) is the bread and butter of system inference for stochastic systems. In some generality, MLE will converge to the correct model in the infinite data limit. In the context of physical approaches to system inference, such as Boltzmann machines, MLE requires the arduous computation of partition functions summing over all configurations, both observed and unobserved. We present here a conceptually and computationally transparent data-driven approach to system inference that is based on the simple question: How should the Boltzmann weights of observed configurations be modified to make the probability distribution of observed configurations close to a flat distribution? This algorithm gives accurate inference by using only observed configurations for systems with a large number of degrees of freedom where other approaches are intractable. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 144,763 |
2103.05985 | Multi-Pretext Attention Network for Few-shot Learning with
Self-supervision | Few-shot learning is an interesting and challenging study, which enables machines to learn from few samples like humans. Existing studies rarely exploit auxiliary information from large amounts of unlabeled data. Self-supervised learning has emerged as an efficient method to utilize unlabeled data. Existing self-supervised learning methods always rely on the combination of geometric transformations for a single sample by augmentation, while seriously neglecting the endogenous correlation information among different samples, which is equally important for the task. In this work, we propose Graph-driven Clustering (GC), a novel augmentation-free method for self-supervised learning, which does not rely on any auxiliary sample and utilizes the endogenous correlation information among input samples. Besides, we propose the Multi-pretext Attention Network (MAN), which exploits a specific attention mechanism to combine the traditional augmentation-reliant methods and our GC, adaptively learning their optimized weights to improve the performance and enabling the feature extractor to obtain more universal representations. We evaluate our MAN extensively on miniImageNet and tieredImageNet datasets and the results demonstrate that the proposed method outperforms the state-of-the-art (SOTA) relevant methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 224,155
1703.07500 | Can Attackers with Limited Information Exploit Historical Data to Mount
Successful False Data Injection Attacks on Power Systems? | This paper studies physical consequences of unobservable false data injection (FDI) attacks designed only with information inside a sub-network of the power system. The goal of this attack is to overload a chosen target line without being detected via measurements. To overcome the limited information, a multiple linear regression model is developed to learn the relationship between the external network and the attack sub-network from historical data. The worst possible consequences of such FDI attacks are evaluated by solving a bi-level optimization problem wherein the first level models the limited attack resources, while the second level formulates the system response to such attacks via DC optimal power flow (OPF). The attack model with limited information is reflected in the DC OPF formulation that only takes into account the system information for the attack sub-network. The vulnerability of this attack model is illustrated on the IEEE 24-bus RTS and IEEE 118-bus systems. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 70,408 |
2502.01191 | Towards Robust and Reliable Concept Representations:
Reliability-Enhanced Concept Embedding Model | Concept Bottleneck Models (CBMs) aim to enhance interpretability by predicting human-understandable concepts as intermediates for decision-making. However, these models often face challenges in ensuring reliable concept representations, which can propagate to downstream tasks and undermine robustness, especially under distribution shifts. Two inherent issues contribute to concept unreliability: sensitivity to concept-irrelevant features (e.g., background variations) and lack of semantic consistency for the same concept across different samples. To address these limitations, we propose the Reliability-Enhanced Concept Embedding Model (RECEM), which introduces a two-fold strategy: Concept-Level Disentanglement to separate irrelevant features from concept-relevant information and a Concept Mixup mechanism to ensure semantic alignment across samples. These mechanisms work together to improve concept reliability, enabling the model to focus on meaningful object attributes and generate faithful concept representations. Experimental results demonstrate that RECEM consistently outperforms existing baselines across multiple datasets, showing superior performance under background and domain shifts. These findings highlight the effectiveness of disentanglement and alignment strategies in enhancing both reliability and robustness in CBMs. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 529,743 |
2012.09159 | DECOR-GAN: 3D Shape Detailization by Conditional Refinement | We introduce a deep generative network for 3D shape detailization, akin to stylization with the style being geometric details. We address the challenge of creating large varieties of high-resolution and detailed 3D geometry from a small set of exemplars by treating the problem as that of geometric detail transfer. Given a low-resolution coarse voxel shape, our network refines it, via voxel upsampling, into a higher-resolution shape enriched with geometric details. The output shape preserves the overall structure (or content) of the input, while its detail generation is conditioned on an input "style code" corresponding to a detailed exemplar. Our 3D detailization via conditional refinement is realized by a generative adversarial network, coined DECOR-GAN. The network utilizes a 3D CNN generator for upsampling coarse voxels and a 3D PatchGAN discriminator to enforce local patches of the generated model to be similar to those in the training detailed shapes. During testing, a style code is fed into the generator to condition the refinement. We demonstrate that our method can refine a coarse shape into a variety of detailed shapes with different styles. The generated results are evaluated in terms of content preservation, plausibility, and diversity. Comprehensive ablation studies are conducted to validate our network designs. Code is available at https://github.com/czq142857/DECOR-GAN. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 211,980 |
2207.02923 | A Local Optimization Framework for Multi-Objective Ergodic Search | Robots have the potential to perform search for a variety of applications under different scenarios. Our work is motivated by humanitarian assistance and disaster relief (HADR) where often it is critical to find signs of life in the presence of conflicting criteria, objectives, and information. We believe ergodic search can provide a framework for exploiting available information as well as exploring for new information for applications such as HADR, especially when time is of the essence. Ergodic search algorithms plan trajectories such that the time spent in a region is proportional to the amount of information in that region, and are able to naturally balance exploitation (myopically searching high-information areas) and exploration (visiting all locations in the search space for new information). Existing ergodic search algorithms, as well as other information-based approaches, typically consider search using only a single information map. However, in many scenarios, the use of multiple information maps that encode different types of relevant information is common. Ergodic search methods currently do not possess the ability to search over multiple information maps simultaneously, nor do they have a way to balance which information gets priority. This leads us to formulate a Multi-Objective Ergodic Search (MOES) problem, which aims at finding the so-called Pareto-optimal solutions, for the purpose of providing human decision makers various solutions that trade off between conflicting criteria. To efficiently solve MOES, we develop a framework called Sequential Local Ergodic Search (SLES) that converts a MOES problem into a "weight space coverage" problem. It leverages the recent advances in ergodic search methods as well as the idea of local optimization to efficiently approximate the Pareto-optimal front. Our numerical results show that SLES runs distinctly faster than the baseline methods.
| false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 306,658 |
1910.12243 | Solving Optimization Problems through Fully Convolutional Networks: an
Application to the Travelling Salesman Problem | In the new wave of artificial intelligence, deep learning is impacting various industries. As a closely related area, optimization algorithms greatly contribute to the development of deep learning. But the reverse applications are still insufficient. Is there an efficient way to solve certain optimization problems through deep learning? The key is to convert the optimization problem into a representation suitable for deep learning. In this paper, the traveling salesman problem (TSP) is studied. Considering that deep learning is good at image processing, an image representation method is proposed to transform a TSP into an image. Based on samples of a 10-city TSP, a fully convolutional network (FCN) is used to learn the mapping from a feasible region to an optimal solution. The training process is analyzed and interpreted through stages. A visualization method is presented to show how an FCN can understand the training task of a TSP. Once training is completed, no significant effort is required to solve a new TSP and the prediction is obtained on the scale of milliseconds. The results show good performance in finding the global optimal solution. Moreover, the developed FCN model has been demonstrated on TSPs with different city numbers, proving excellent generalization performance. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 151,015
1904.12658 | MSDC-Net: Multi-Scale Dense and Contextual Networks for Automated
Disparity Map for Stereo Matching | Disparity prediction from stereo images is essential to computer vision applications including autonomous driving, 3D model reconstruction, and object detection. To predict an accurate disparity map, we propose a novel deep learning architecture for detecting the disparity map from a rectified pair of stereo images, called MSDC-Net. Our MSDC-Net contains two modules: multi-scale fusion 2D convolution and multi-scale residual 3D convolution modules. The multi-scale fusion 2D convolution module exploits the potential multi-scale features, extracting and fusing features at different scales via Dense-Net. The multi-scale residual 3D convolution module learns the different scale geometry context from the cost volume, which is aggregated by the multi-scale fusion 2D convolution module. Experimental results on the Scene Flow and KITTI datasets demonstrate that our MSDC-Net significantly outperforms other approaches in the non-occluded region. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 129,189
2011.11761 | A robust solution of a statistical inverse problem in multiscale
computational mechanics using an artificial neural network | This work addresses the inverse identification of apparent elastic properties of random heterogeneous materials using machine learning based on artificial neural networks. The proposed neural network-based identification method requires the construction of a database from which an artificial neural network can be trained to learn the nonlinear relationship between the hyperparameters of a prior stochastic model of the random compliance field and some relevant quantities of interest of an ad hoc multiscale computational model. An initial database made up with input and target data is first generated from the computational model, from which a processed database is deduced by conditioning the input data with respect to the target data using the nonparametric statistics. Two-and three-layer feedforward artificial neural networks are then trained from each of the initial and processed databases to construct an algebraic representation of the nonlinear mapping between the hyperparameters (network outputs) and the quantities of interest (network inputs). The performances of the trained artificial neural networks are analyzed in terms of mean squared error, linear regression fit and probability distribution between network outputs and targets for both databases. An ad hoc probabilistic model of the input random vector is finally proposed in order to take into account uncertainties on the network input and to perform a robustness analysis of the network output with respect to the input uncertainties level. 
The capability of the proposed neural network-based identification method to efficiently solve the underlying statistical inverse problem is illustrated through two numerical examples developed within the framework of 2D plane stress linear elasticity, namely a first validation example on synthetic data obtained through computational simulations and a second application example on real experimental data obtained through a physical experiment monitored by digital image correlation on a real heterogeneous biological material (beef cortical bone). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 207,932 |
2404.02148 | Diffusion$^2$: Dynamic 3D Content Generation via Score Composition of
Video and Multi-view Diffusion Models | Recent advancements in 3D generation are predominantly propelled by improvements in 3D-aware image diffusion models. These models are pretrained on Internet-scale image data and fine-tuned on massive 3D data, offering the capability of producing highly consistent multi-view images. However, due to the scarcity of synchronized multi-view video data, it remains challenging to adapt this paradigm to 4D generation directly. Despite that, the available video and 3D data are adequate for training video and multi-view diffusion models separately that can provide satisfactory dynamic and geometric priors respectively. To take advantage of both, this paper presents Diffusion$^2$, a novel framework for dynamic 3D content creation that reconciles the knowledge about geometric consistency and temporal smoothness from these models to directly sample dense multi-view multi-frame images which can be employed to optimize continuous 4D representation. Specifically, we design a simple yet effective denoising strategy via score composition of pretrained video and multi-view diffusion models based on the probability structure of the target image array. To alleviate the potential conflicts between two heterogeneous scores, we further introduce variance-reducing sampling via interpolated steps, facilitating smooth and stable generation. Owing to the high parallelism of the proposed image generation process and the efficiency of the modern 4D reconstruction pipeline, our framework can generate 4D content within few minutes. Notably, our method circumvents the reliance on expensive and hard-to-scale 4D data, thereby having the potential to benefit from the scaling of the foundation video and multi-view diffusion models. Extensive experiments demonstrate the efficacy of our proposed framework in generating highly seamless and consistent 4D assets under various types of conditions. 
| false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 443,738 |
1503.07026 | A new model-free design for vehicle control and its validation through
an advanced simulation platform | A new model-free setting and the corresponding "intelligent" P and PD controllers are employed for the longitudinal and lateral motions of a vehicle. This new approach has been developed and used in order to ensure simultaneously a best profile tracking for the longitudinal and lateral behaviors. The longitudinal speed and the derivative of the lateral deviation, on one hand, the driving/braking torque and the steering angle, on the other hand, are respectively the output and the input variables. Let us emphasize that a "good" mathematical modeling, which is quite difficult, if not impossible to obtain, is not needed for such a design. An important part of this publication is focused on the presentation of simulation results with actual and virtual data. The actual data, used in Matlab as reference trajectories, have been obtained from a properly instrumented car (Peugeot 406). Other virtual sets of data have been generated through the interconnected platform SiVIC/RTMaps. It is a dedicated virtual simulation platform for prototyping and validation of advanced driving assistance systems. Keywords- Longitudinal and lateral vehicle control, model-free control, intelligent P controller (i-P controller), algebraic estimation, ADAS (Advanced Driving Assistance Systems). | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 41,430 |
2101.05500 | Joint Dimensionality Reduction for Separable Embedding Estimation | Low-dimensional embeddings for data from disparate sources play critical roles in multi-modal machine learning, multimedia information retrieval, and bioinformatics. In this paper, we propose a supervised dimensionality reduction method that learns linear embeddings jointly for two feature vectors representing data of different modalities or data from distinct types of entities. We also propose an efficient feature selection method that complements, and can be applied prior to, our joint dimensionality reduction method. Assuming that there exist true linear embeddings for these features, our analysis of the error in the learned linear embeddings provides theoretical guarantees that the dimensionality reduction method accurately estimates the true embeddings when certain technical conditions are satisfied and the number of samples is sufficiently large. The derived sample complexity results are echoed by numerical experiments. We apply the proposed dimensionality reduction method to gene-disease association, and predict unknown associations using kernel regression on the dimension-reduced feature vectors. Our approach compares favorably against other dimensionality reduction methods, and against a state-of-the-art method of bilinear regression for predicting gene-disease associations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 215,448 |