| id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1308.5032 | How Did Humans Become So Creative? A Computational Approach | This paper summarizes efforts to computationally model two transitions in the evolution of human creativity: its origins about two million years ago, and the 'big bang' of creativity about 50,000 years ago. Using a computational model of cultural evolution in which neural network based agents evolve ideas for actions through invention and imitation, we tested the hypothesis that human creativity began with the onset of the capacity for recursive recall. We compared runs in which agents were limited to single-step actions to runs in which they used recursive recall to chain simple actions into complex ones. Chaining resulted in higher diversity, open-ended novelty, no ceiling on the mean fitness of actions, and greater ability to make use of learning. Using a computational model of portrait painting, we tested the hypothesis that the explosion of creativity in the Middle/Upper Paleolithic was due to the onset of contextual focus: the capacity to shift between associative and analytic thought. This resulted in faster convergence on portraits that resembled the sitter, employed painterly techniques, and were rated as preferable. We conclude that recursive recall and contextual focus provide a computationally plausible explanation of how humans evolved the means to transform this planet. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | true | false | false | 26,590 |
2007.05873 | Reconfigurable Intelligent Surfaces Aided mmWave NOMA: Joint Power Allocation, Phase Shifts, and Hybrid Beamforming Optimization | In this paper, a reconfigurable intelligent surface (RIS)-aided millimeter wave (mmWave) non-orthogonal multiple access (NOMA) system is considered. In particular, we consider an RIS-aided mmWave-NOMA downlink system with a hybrid beamforming structure. To maximize the achievable sum-rate under a minimum rate constraint for the users and a minimum transmit power constraint, a joint RIS phase shifts, hybrid beamforming, and power allocation problem is formulated. To solve this non-convex optimization problem, we develop an alternating optimization algorithm. Specifically, first, the non-convex problem is transformed into three subproblems, i.e., power allocation, joint phase shifts and analog beamforming optimization, and digital beamforming design. Then, we solve the power allocation problem under fixed phase shifts of the RIS and hybrid beamforming. Finally, given the power allocation matrix, an alternating manifold optimization (AMO)-based method and a successive convex approximation (SCA)-based method are utilized to design the phase shifts, analog beamforming, and transmit beamforming, respectively. Numerical results reveal that the proposed alternating optimization algorithm outperforms state-of-the-art schemes in terms of sum-rate. Moreover, compared to a conventional mmWave-NOMA system without RIS, the proposed RIS-aided mmWave-NOMA system is capable of improving the achievable sum-rate of the system. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 186,812 |
2501.12770 | On Tradeoffs in Learning-Augmented Algorithms | The field of learning-augmented algorithms has gained significant attention in recent years. These algorithms, using potentially inaccurate predictions, must exhibit three key properties: consistency, robustness, and smoothness. In scenarios where distributional information about predictions is available, a strong expected performance is required. Typically, the design of these algorithms involves a natural tradeoff between consistency and robustness, and previous works aimed to achieve Pareto-optimal tradeoffs for specific problems. However, in some settings, this comes at the expense of smoothness. This paper demonstrates that certain problems involve multiple tradeoffs between consistency, robustness, smoothness, and average performance. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 526,431 |
2101.10356 | Deep learning based mixed-dimensional GMM for characterizing variability in CryoEM | Structural flexibility and/or dynamic interactions with other molecules are a critical aspect of protein function. CryoEM provides direct visualization of individual macromolecules sampling different conformational and compositional states. While numerous methods are available for computational classification of discrete states, characterization of continuous conformational changes or large numbers of discrete states without human supervision remains challenging. Here we present e2gmm, a machine learning algorithm to determine a conformational landscape for proteins or complexes using a 3-D Gaussian mixture model mapped onto 2-D particle images in known orientations. Using a deep neural network architecture, e2gmm can automatically resolve the structural heterogeneity within the protein complex and map particles onto a small latent space describing conformational and compositional changes. This system presents a more intuitive and flexible representation than other manifold methods currently in use. We demonstrate this method on both simulated data as well as three biological systems, to explore compositional and conformational changes at a range of scales. The software is distributed as part of EMAN2. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 216,899 |
2002.12242 | Lipschitz and Comparator-Norm Adaptivity in Online Learning | We study Online Convex Optimization in the unbounded setting where neither predictions nor gradients are constrained. The goal is to simultaneously adapt to both the sequence of gradients and the comparator. We first develop parameter-free and scale-free algorithms for a simplified setting with hints. We present two versions: the first adapts to the squared norms of both comparator and gradients separately using $O(d)$ time per round, the second adapts to their squared inner products (which measure variance only in the comparator direction) in time $O(d^3)$ per round. We then generalize two prior reductions to the unbounded setting; one to not need hints, and a second to deal with the range ratio problem (which already arises in prior work). We discuss their optimality in light of prior and new lower bounds. We apply our methods to obtain sharper regret bounds for scale-invariant online prediction with linear models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 165,968 |
2411.07467 | Machines and Mathematical Mutations: Using GNNs to Characterize Quiver Mutation Classes | Machine learning is becoming an increasingly valuable tool in mathematics, enabling one to identify subtle patterns across collections of examples so vast that they would be impossible for a single researcher to feasibly review and analyze. In this work, we use graph neural networks to investigate quiver mutation -- an operation that transforms one quiver (or directed multigraph) into another -- which is central to the theory of cluster algebras with deep connections to geometry, topology, and physics. In the study of cluster algebras, the question of mutation equivalence is of fundamental concern: given two quivers, can one efficiently determine if one quiver can be transformed into the other through a sequence of mutations? Currently, this question has only been resolved in specific cases. In this paper, we use graph neural networks and AI explainability techniques to discover mutation equivalence criteria for the previously unknown case of quivers of type $\tilde{D}_n$. Along the way, we also show that even without explicit training to do so, our model captures structure within its hidden representation that allows us to reconstruct known criteria from type $D_n$, adding to the growing evidence that modern machine learning models are capable of learning abstract and general rules from mathematical data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 507,538 |
2211.05337 | Spatiotemporal k-means | Spatiotemporal data is increasingly available due to emerging sensor and data acquisition technologies that track moving objects. Spatiotemporal clustering addresses the need to efficiently discover patterns and trends in moving object behavior without human supervision. One application of interest is the discovery of moving clusters, where clusters have a static identity, but their location and content can change over time. We propose a two phase spatiotemporal clustering method called spatiotemporal k-means (STkM) that is able to analyze the multi-scale relationships within spatiotemporal data. By optimizing an objective function that is unified over space and time, the method can track dynamic clusters at both short and long timescales with minimal parameter tuning and no post-processing. We begin by proposing a theoretical generating model for spatiotemporal data and prove the efficacy of STkM in this setting. We then evaluate STkM on a recently developed collective animal behavior benchmark dataset and show that STkM outperforms baseline methods in the low-data limit, which is a critical regime of consideration in many emerging applications. Finally, we showcase how STkM can be extended to more complex machine learning tasks, particularly unsupervised region of interest detection and tracking in videos. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 329,517 |
1611.02163 | Unrolled Generative Adversarial Networks | We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 63,518 |
2008.08212 | Joint Channel Assignment and Power Allocation for Multi-UAV Communication | Unmanned aerial vehicle (UAV) swarm has emerged as a promising novel paradigm to achieve better coverage and higher capacity for future wireless networks by exploiting the more favorable line-of-sight (LoS) propagation. To reap the potential gains of UAV swarm, the remote control signal sent by the ground control unit (GCU) is essential, whereas the control signal quality is susceptible in practice to the adjacent channel interference (ACI) and the external interference (EI) from radiation sources distributed across the region. To tackle these challenges, this paper considers priority-aware resource coordination in a multi-UAV communication system, where multiple UAVs are controlled by a GCU to perform certain tasks with a pre-defined trajectory. Specifically, we maximize the minimum signal-to-interference-plus-noise ratio (SINR) among all the UAVs by jointly optimizing the channel assignment and power allocation strategy under stringent resource availability constraints. According to the intensity of ACI, we consider the corresponding problem in two scenarios, i.e., Null-ACI and ACI systems. By virtue of the particular problem structure in the Null-ACI case, we first recast the formulation into an equivalent yet more tractable form and obtain the global optimal solution via the Hungarian algorithm. For general ACI systems, we develop an efficient iterative algorithm for its solution based on the smooth approximation and alternating optimization methods. Extensive simulation results demonstrate that the proposed algorithms can significantly enhance the minimum SINR among all the UAVs and adapt the allocation of communication resources to diverse mission priorities. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 192,348 |
2005.05752 | A Secure Federated Learning Framework for 5G Networks | Federated Learning (FL) has been recently proposed as an emerging paradigm to build machine learning models using distributed training datasets that are locally stored and maintained on different devices in 5G networks while providing privacy preservation for participants. In FL, the central aggregator accumulates local updates uploaded by participants to update a global model. However, there are two critical security threats: poisoning and membership inference attacks. These attacks may be carried out by malicious or unreliable participants, resulting in the construction failure of global models or privacy leakage of FL models. Therefore, it is crucial for FL to develop effective security defenses. In this article, we propose a blockchain-based secure FL framework to create smart contracts and prevent malicious or unreliable participants from participating in FL. In doing so, the central aggregator recognizes malicious and unreliable participants by automatically executing smart contracts to defend against poisoning attacks. Further, we use local differential privacy techniques to prevent membership inference attacks. Numerical results suggest that the proposed framework can effectively deter poisoning and membership inference attacks, thereby improving the security of FL in 5G networks. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 176,820 |
2409.20270 | Loose Social-Interaction Recognition in Real-world Therapy Scenarios | The computer vision community has explored dyadic interactions for atomic actions such as pushing, carrying-object, etc. However, with the advancement in deep learning models, there is a need to explore more complex dyadic situations such as loose interactions. These are interactions where two people perform certain atomic activities to complete a global action irrespective of temporal synchronisation and physical engagement, like cooking-together for example. Analysing these types of dyadic-interactions has several useful applications in the medical domain for social-skills development and mental health diagnosis. To achieve this, we propose a novel dual-path architecture to capture the loose interaction between two individuals. Our model learns global abstract features from each stream via a CNNs backbone and fuses them using a new Global-Layer-Attention module based on a cross-attention strategy. We evaluate our model on real-world autism diagnoses such as our Loose-Interaction dataset, and the publicly available Autism dataset for loose interactions. Our network achieves baseline results on the Loose-Interaction and SOTA results on the Autism datasets. Moreover, we study different social interactions by experimenting on a publicly available dataset i.e. NTU-RGB+D (interactive classes from both NTU-60 and NTU-120). We have found that different interactions require different network designs. We also compare a slightly different version of our method by incorporating time information to address tight interactions achieving SOTA results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 493,064 |
2212.10718 | Interpretability and causal discovery of the machine learning models to predict the production of CBM wells after hydraulic fracturing | Machine learning approaches are widely studied in the production prediction of CBM wells after hydraulic fracturing, but rarely used in practice due to the low generalization ability and the lack of interpretability. A novel methodology is proposed in this article to discover the latent causality from observed data, which is aimed at finding an indirect way to interpret the machine learning results. Based on the theory of causal discovery, a causal graph is derived with explicit input, output, treatment and confounding variables. Then, SHAP is employed to analyze the influence of the factors on the production capability, which indirectly interprets the machine learning models. The proposed method can capture the underlying nonlinear relationship between the factors and the output, which remedies the limitation of the traditional machine learning routines based on the correlation analysis of factors. The experiment on the data of CBM shows that the relationship between the production and the geological/engineering factors detected by the presented method is consistent with the actual physical mechanism. Meanwhile, compared with traditional methods, the interpretable machine learning models have better performance in forecasting production capability, averaging 20% improvement in accuracy. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 337,571 |
2408.04614 | Better Alignment with Instruction Back-and-Forth Translation | We propose a new method, instruction back-and-forth translation, to construct high-quality synthetic data grounded in world knowledge for aligning large language models (LLMs). Given documents from a web corpus, we generate and curate synthetic instructions using the backtranslation approach proposed by Li et al.(2023a), and rewrite the responses to improve their quality further based on the initial documents. Fine-tuning with the resulting (backtranslated instruction, rewritten response) pairs yields higher win rates on AlpacaEval than using other common instruction datasets such as Humpback, ShareGPT, Open Orca, Alpaca-GPT4 and Self-instruct. We also demonstrate that rewriting the responses with an LLM outperforms direct distillation, and the two generated text distributions exhibit significant distinction in embedding space. Further analysis shows that our backtranslated instructions are of higher quality than other sources of synthetic instructions, while our responses are more diverse and complex than those obtained from distillation. Overall we find that instruction back-and-forth translation combines the best of both worlds -- making use of the information diversity and quantity found on the web, while ensuring the quality of the responses which is necessary for effective alignment. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 479,449 |
2201.06888 | Autoencoding Video Latents for Adversarial Video Generation | Given the three-dimensional complexity of a video signal, training a robust and diverse GAN-based video generative model is onerous due to the large stochasticity involved in the data space. Learning disentangled representations of the data helps to improve robustness and provide control in the sampling process. For video generation, there is recent progress in this area by considering motion and appearance as orthogonal information and designing architectures that efficiently disentangle them. These approaches rely on handcrafting architectures that impose structural priors on the generator to decompose appearance and motion codes in the latent space. Inspired by the recent advancements in autoencoder-based image generation, we present AVLAE (Adversarial Video Latent AutoEncoder), a two-stream latent autoencoder where the video distribution is learned by adversarial training. In particular, we propose to autoencode the motion and appearance latent vectors of the video generator in the adversarial setting. We demonstrate that our approach learns to disentangle motion and appearance codes even without the explicit structural composition in the generator. Several experiments with qualitative and quantitative results demonstrate the effectiveness of our method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 275,870 |
2301.11961 | Reduced-Order Autodifferentiable Ensemble Kalman Filters | This paper introduces a computational framework to reconstruct and forecast a partially observed state that evolves according to an unknown or expensive-to-simulate dynamical system. Our reduced-order autodifferentiable ensemble Kalman filters (ROAD-EnKFs) learn a latent low-dimensional surrogate model for the dynamics and a decoder that maps from the latent space to the state space. The learned dynamics and decoder are then used within an ensemble Kalman filter to reconstruct and forecast the state. Numerical experiments show that if the state dynamics exhibit a hidden low-dimensional structure, ROAD-EnKFs achieve higher accuracy at lower computational cost compared to existing methods. If such structure is not expressed in the latent state dynamics, ROAD-EnKFs achieve similar accuracy at lower cost, making them a promising approach for surrogate state reconstruction and forecasting. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 342,329 |
1610.07531 | PhaseMax: Convex Phase Retrieval via Basis Pursuit | We consider the recovery of a (real- or complex-valued) signal from magnitude-only measurements, known as phase retrieval. We formulate phase retrieval as a convex optimization problem, which we call PhaseMax. Unlike other convex methods that use semidefinite relaxation and lift the phase retrieval problem to a higher dimension, PhaseMax is a "non-lifting" relaxation that operates in the original signal dimension. We show that the dual problem to PhaseMax is Basis Pursuit, which implies that phase retrieval can be performed using algorithms initially designed for sparse signal recovery. We develop sharp lower bounds on the success probability of PhaseMax for a broad range of random measurement ensembles, and we analyze the impact of measurement noise on the solution accuracy. We use numerical results to demonstrate the accuracy of our recovery guarantees, and we showcase the efficacy and limits of PhaseMax in practice. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 62,800 |
2309.14092 | From OCEL to DOCEL -- Datasets and Automated Transformation | Object-centric event data represent processes from the point of view of all the involved object types. This perspective has gained interest in recent years as it supports the analysis of processes that previously could not be adequately captured, due to the lack of a clear case notion as well as an increasing amount of output data that needs to be stored. Although publicly available event logs are crucial artifacts for researchers to develop and evaluate novel process mining techniques, the currently available object-centric event logs have limitations in this regard. Specifically, they mainly focus on control-flow and rarely contain objects with attributes that change over time, even though this is not realistic, as the attribute values of objects can be altered during their lifecycle. This paper addresses this gap by providing two means of establishing object-centric datasets with dynamically evolving attributes. First, we provide event log generators, which allow researchers to generate customized, artificial logs with dynamic attributes in the recently proposed DOCEL format. Second, we propose and evaluate an algorithm to convert OCEL logs into DOCEL logs, which involves the detection of event attributes that capture evolving object information and the creation of dynamic attributes from these. Through these contributions, this paper supports the advancement of object-centric process analysis by providing researchers with new means to obtain relevant data to use during the development of new techniques. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 394,467 |
1608.03351 | Uplink-Downlink Duality for Integer-Forcing | Consider a Gaussian multiple-input multiple-output (MIMO) multiple-access channel (MAC) with channel matrix $\mathbf{H}$ and a Gaussian MIMO broadcast channel (BC) with channel matrix $\mathbf{H}^{\mathsf{T}}$. For the MIMO MAC, the integer-forcing architecture consists of first decoding integer-linear combinations of the transmitted codewords, which are then solved for the original messages. For the MIMO BC, the integer-forcing architecture consists of pre-inverting the integer-linear combinations at the transmitter so that each receiver can obtain its desired codeword by decoding an integer-linear combination. In both cases, integer-forcing offers higher achievable rates than zero-forcing while maintaining a similar implementation complexity. This paper establishes an uplink-downlink duality relationship for integer-forcing, i.e., any sum rate that is achievable via integer-forcing on the MIMO MAC can be achieved via integer-forcing on the MIMO BC with the same sum power and vice versa. Using this duality relationship, it is shown that integer-forcing can operate within a constant gap of the MIMO BC sum capacity. Finally, the paper proposes a duality-based iterative algorithm for the non-convex problem of selecting optimal beamforming and equalization vectors, and establishes that it converges to a local optimum. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 59,663 |
2303.02304 | Coupled Multiwavelet Neural Operator Learning for Coupled Partial Differential Equations | Solving coupled partial differential equations (PDEs) is a key task in modeling the complex dynamics of many physical processes. Recently, neural operators have shown the ability to solve PDEs by learning the integral kernel directly in Fourier/Wavelet space, so the difficulty in solving the coupled PDEs lies in dealing with the coupled mappings between the functions. Towards this end, we propose a \textit{coupled multiwavelets neural operator} (CMWNO) learning scheme by decoupling the coupled integral kernels during the multiwavelet decomposition and reconstruction procedures in the Wavelet space. The proposed model achieves significantly higher accuracy compared to previous learning-based solvers in solving the coupled PDEs including Gray-Scott (GS) equations and the non-local mean field game (MFG) problem. According to our experimental results, the proposed model exhibits a $2\times \sim 4\times$ improvement in relative $L^2$ error compared to the best results from the state-of-the-art models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 349,295 |
1906.01827 | Coresets for Data-efficient Training of Machine Learning Models | Incremental gradient (IG) methods, such as stochastic gradient descent and its variants, are commonly used for large scale optimization in machine learning. Despite the sustained effort to make IG methods more data-efficient, it remains an open question how to select a training data subset that can theoretically and practically perform on par with the full dataset. Here we develop CRAIG, a method to select a weighted subset (or coreset) of training data that closely estimates the full gradient by maximizing a submodular function. We prove that applying IG to this subset is guaranteed to converge to the (near)optimal solution with the same convergence rate as that of IG for convex optimization. As a result, CRAIG achieves a speedup that is inversely proportional to the size of the subset. To our knowledge, this is the first rigorous method for data-efficient training of general machine learning models. Our extensive set of experiments shows that CRAIG, while achieving practically the same solution, speeds up various IG methods by up to 6x for logistic regression and 3x for training deep neural networks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 133,850 |
2405.13538 | Ultra-Fast Adaptive Track Detection Network | Railway detection is critical for the automation of railway systems. Existing models often prioritize either speed or accuracy, but achieving both remains a challenge. To address the limitations of presetting anchor groups that struggle with varying track proportions from different camera angles, an ultra-fast adaptive track detection network is proposed in this paper. This network comprises a backbone network and two specialized branches (Horizontal Coordinate Locator and Perspective Identifier). The Perspective Identifier selects the suitable anchor group from preset anchor groups, thereby determining the row coordinates of the railway track. Subsequently, the Horizontal Coordinate Locator provides row classification results based on multiple preset anchor groups. Then, utilizing the results from the Perspective Identifier, it generates the column coordinates of the railway track. This network is evaluated on multiple datasets, with the lightweight version achieving an F1 score of 98.68% on the SRail dataset and a detection rate of up to 473 FPS. Compared to the SOTA, the proposed model is competitive in both speed and accuracy. The dataset and code are available at https://github.com/idnihai/UFATD | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 455,976 |
2204.11067 | CORE: Simple and Effective Session-based Recommendation within Consistent Representation Space | Session-based Recommendation (SBR) refers to the task of predicting the next item based on short-term user behaviors within an anonymous session. However, session embedding learned by a non-linear encoder is usually not in the same representation space as item embeddings, resulting in the inconsistent prediction issue while recommending items. To address this issue, we propose a simple and effective framework named CORE, which can unify the representation space for both the encoding and decoding processes. Firstly, we design a representation-consistent encoder that takes the linear combination of input item embeddings as session embedding, guaranteeing that sessions and items are in the same representation space. Besides, we propose a robust distance measuring method to prevent overfitting of embeddings in the consistent representation space. Extensive experiments conducted on five public real-world datasets demonstrate the effectiveness and efficiency of the proposed method. The code is available at: https://github.com/RUCAIBox/CORE. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 293,022 |
2311.11913 | Deep Calibration of Market Simulations using Neural Density Estimators
and Embedding Networks | The ability to construct a realistic simulator of financial exchanges, including reproducing the dynamics of the limit order book, can give insight into many counterfactual scenarios, such as a flash crash, a margin call, or changes in macroeconomic outlook. In recent years, agent-based models have been developed that reproduce many features of an exchange, as summarised by a set of stylised facts and statistics. However, the ability to calibrate simulators to a specific period of trading remains an open challenge. In this work, we develop a novel approach to the calibration of market simulators by leveraging recent advances in deep learning, specifically using neural density estimators and embedding networks. We demonstrate that our approach is able to correctly identify high probability parameter sets, both when applied to synthetic and historical data, and without reliance on manually selected or weighted ensembles of stylised facts. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 409,132 |
2110.13136 | What Would Jiminy Cricket Do? Towards Agents That Behave Morally | When making everyday decisions, people are guided by their conscience, an internal sense of right and wrong. By contrast, artificial agents are currently not endowed with a moral sense. As a consequence, they may learn to behave immorally when trained on environments that ignore moral concerns, such as violent video games. With the advent of generally capable agents that pretrain on many environments, it will become necessary to mitigate inherited biases from environments that teach immoral behavior. To facilitate the development of agents that avoid causing wanton harm, we introduce Jiminy Cricket, an environment suite of 25 text-based adventure games with thousands of diverse, morally salient scenarios. By annotating every possible game state, the Jiminy Cricket environments robustly evaluate whether agents can act morally while maximizing reward. Using models with commonsense moral knowledge, we create an elementary artificial conscience that assesses and guides agents. In extensive experiments, we find that the artificial conscience approach can steer agents towards moral behavior without sacrificing performance. | false | false | false | false | true | false | true | false | true | false | false | false | false | true | false | false | false | false | 263,079 |
2206.00770 | Winning the 3rd Japan Automotive AI Challenge -- Autonomous Racing with
the Autoware.Auto Open Source Software Stack | The 3rd Japan Automotive AI Challenge was an international online autonomous racing challenge where 164 teams competed in December 2021. This paper outlines the winning strategy to this competition, and the advantages and challenges of using the Autoware.Auto open source autonomous driving platform for multi-agent racing. Our winning approach includes a lane-switching opponent overtaking strategy, a global raceline optimization, and the integration of various tools from Autoware.Auto including a Model-Predictive Controller. We describe the use of perception, planning and control modules for high-speed racing applications and provide experience-based insights on working with Autoware.Auto. While our approach is a rule-based strategy that is suitable for non-interactive opponents, it provides a good reference and benchmark for learning-enabled approaches. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 300,240 |
2202.06458 | Faster hyperspectral image classification based on selective kernel
mechanism using deep convolutional networks | Hyperspectral imagery is rich in spatial and spectral information. A 3D-CNN can simultaneously acquire features of the spatial and spectral dimensions to facilitate classification, but hyperspectral images contain redundant spectral information. Using consecutive 3D-CNNs results in a high number of parameters, high computational requirements, and overly long training times. This letter designs the Faster Selective Kernel mechanism Network (FSKNet), which balances this problem. It designs 3D-CNN and 2D-CNN conversion modules, using the 3D-CNN to complete feature extraction while reducing the dimensionality of the spatial and spectral dimensions. However, such a model is not lightweight enough. In the converted 2D-CNN, a selective kernel mechanism is proposed, which allows each neuron to adjust its receptive field size based on the scales of the two-way input information. The selective kernel mechanism mainly includes two components, an SE module and variable convolution. The SE module acquires channel-dimension attention, and variable convolution obtains spatial-dimension deformation information of ground objects. The model is more accurate, faster, and less computationally intensive. FSKNet achieves high accuracy on the IN, UP, Salinas, and Botswana datasets with very few parameters. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 280,225
2402.16170 | Nonparametric Steady-state Learning for Robust Output Regulation of
Nonlinear Output Feedback Systems | This article addresses the nonadaptive and robust output regulation problem of the general nonlinear output feedback system with error output. The global robust output regulation problem for a class of general output feedback nonlinear systems with an uncertain exosystem and high relative degree can be tackled by constructing a linear generic internal model, provided that a continuous nonlinear mapping exists. Leveraging the presented nonadaptive framework facilitates the conversion of the nonlinear robust output regulation problem into a robust nonadaptive stabilization endeavour for the augmented system endowed with Input-to-State Stable dynamics, removing the need for constructing a specific Lyapunov function with positive semidefinite derivatives. To ensure the feasibility of the nonlinear mapping, the approach is extended by incorporating the nonparametric learning framework. Moreover, the introduced nonparametric learning framework provides the ability to learn the dynamics of the steady-state/input behaviour from the signal generated from the internal model only using the output error feedback. As a result, the nonadaptive/nonparametric approach can be advantageous by guaranteeing convergence of the estimation and tracking error even when the underlying controlled system dynamics are complex or poorly understood. The effectiveness of the theoretical results is illustrated for a controlled Duffing system and a continuously stirred tank reactor. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 432,454
1902.06094 | Differentiable reservoir computing | Much effort has been devoted in the last two decades to characterize the situations in which a reservoir computing system exhibits the so-called echo state (ESP) and fading memory (FMP) properties. These important features amount, in mathematical terms, to the existence and continuity of global reservoir system solutions. That research is complemented in this paper with the characterization of the differentiability of reservoir filters for very general classes of discrete-time deterministic inputs. This constitutes a novel strong contribution to the long line of research on the ESP and the FMP and, in particular, links to existing research on the input-dependence of the ESP. Differentiability has been shown in the literature to be a key feature in the learning of attractors of chaotic dynamical systems. A Volterra-type series representation for reservoir filters with semi-infinite discrete-time inputs is constructed in the analytic case using Taylor's theorem and corresponding approximation bounds are provided. Finally, it is shown as a corollary of these results that any fading memory filter can be uniformly approximated by a finite Volterra series with finite memory. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | true | false | false | 121,681 |
2410.18340 | Thermal Chameleon: Task-Adaptive Tone-mapping for Radiometric
Thermal-Infrared images | Thermal Infrared (TIR) imaging provides robust perception for navigating in challenging outdoor environments but faces issues with poor texture and low image contrast due to its 14/16-bit format. Conventional methods utilize various tone-mapping methods to enhance the contrast and photometric consistency of TIR images; however, the choice of tone-mapping is largely dependent on knowing the task and temperature-dependent priors to work well. In this paper, we present Thermal Chameleon Network (TCNet), a task-adaptive tone-mapping approach for RAW 14-bit TIR images. Given the same image, TCNet tone-maps different representations of TIR images tailored for each specific task, eliminating the heuristic image rescaling preprocessing and reliance on the extensive prior knowledge of the scene temperature or task-specific characteristics. TCNet exhibits improved generalization performance across object detection and monocular depth estimation, with minimal computational overhead and modular integration to existing architectures for various tasks. Project Page: https://github.com/donkeymouse/ThermalChameleon | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 501,838
2305.10634 | Modified Gauss-Newton Algorithms under Noise | Gauss-Newton methods and their stochastic version have been widely used in machine learning and signal processing. Their nonsmooth counterparts, modified Gauss-Newton or prox-linear algorithms, can lead to contrasting outcomes when compared to gradient descent in large-scale statistical settings. We explore the contrasting performance of these two classes of algorithms in theory on a stylized statistical example, and experimentally on learning problems including structured prediction. In theory, we delineate the regime where the quadratic convergence of the modified Gauss-Newton method is active under statistical noise. In the experiments, we underline the versatility of stochastic (sub)-gradient descent to minimize nonsmooth composite objectives. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 365,152 |
1702.01940 | One shot entanglement assisted classical and quantum communication over
noisy quantum channels: A hypothesis testing and convex split approach | Capacity of a quantum channel characterizes the limits of reliable communication through a noisy quantum channel. This fundamental information theoretic question is very well studied, especially in the setting of many independent uses of the channel. An important scenario, from both a practical and a conceptual point of view, is when the channel can be used only once. This is known as the one-shot channel coding problem. We provide a tight characterization of the one-shot entanglement assisted classical capacity of a quantum channel. We arrive at our result by introducing a simple decoding technique which we refer to as position-based decoding. We also consider two other important quantum network scenarios: a quantum channel with a jammer and a quantum broadcast channel. For these problems, we use the recently introduced convex split technique [Anshu, Devabathini and Jain 2014] in addition to position-based decoding. Our approach exhibits that the simultaneous use of these two techniques provides a uniform and conceptually simple framework for designing communication protocols for quantum networks. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 67,898
2501.05078 | Analyzing Memorization in Large Language Models through the Lens of
Model Attribution | Large Language Models (LLMs) are prevalent in modern applications but often memorize training data, leading to privacy breaches and copyright issues. Existing research has mainly focused on posthoc analyses, such as extracting memorized content or developing memorization metrics, without exploring the underlying architectural factors that contribute to memorization. In this work, we investigate memorization from an architectural lens by analyzing how attention modules at different layers impact memorization and generalization performance. Using attribution techniques, we systematically intervene in the LLM architecture by bypassing attention modules at specific blocks while keeping other components like layer normalization and MLP transformations intact. We provide theorems analyzing our intervention mechanism from a mathematical view, bounding the difference in layer outputs with and without our attributions. Our theoretical and empirical analyses reveal that attention modules in deeper transformer blocks are primarily responsible for memorization, whereas earlier blocks are crucial for the model's generalization and reasoning capabilities. We validate our findings through comprehensive experiments on different LLM families (Pythia and GPTNeo) and five benchmark datasets. Our insights offer a practical approach to mitigate memorization in LLMs while preserving their performance, contributing to safer and more ethical deployment in real-world applications. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 523,455
2002.02424 | Reliability Validation of Learning Enabled Vehicle Tracking | This paper studies the reliability of a real-world learning-enabled system, which conducts dynamic vehicle tracking based on a high-resolution wide-area motion imagery input. The system consists of multiple neural network components -- to process the imagery inputs -- and multiple symbolic (Kalman filter) components -- to analyse the processed information for vehicle tracking. It is known that neural networks suffer from adversarial examples, which make them lack robustness. However, it is unclear if and how the adversarial examples over learning components can affect the overall system-level reliability. By integrating a coverage-guided neural network testing tool, DeepConcolic, with the vehicle tracking system, we found that (1) the overall system can be resilient to some adversarial examples thanks to the existence of other components, and (2) the overall system presents an extra level of uncertainty which cannot be determined by analysing the deep learning components only. This research suggests the need for novel verification and validation methods for learning-enabled systems. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 162,918 |
2110.09167 | RKHS-SHAP: Shapley Values for Kernel Methods | Feature attribution for kernel methods is often heuristic and not individualised for each prediction. To address this, we turn to the concept of Shapley values~(SV), a coalition game theoretical framework that has previously been applied to different machine learning model interpretation tasks, such as linear models, tree ensembles and deep networks. By analysing SVs from a functional perspective, we propose \textsc{RKHS-SHAP}, an attribution method for kernel machines that can efficiently compute both \emph{Interventional} and \emph{Observational Shapley values} using kernel mean embeddings of distributions. We show theoretically that our method is robust with respect to local perturbations - a key yet often overlooked desideratum for consistent model interpretation. Further, we propose \emph{Shapley regulariser}, applicable to a general empirical risk minimisation framework, allowing learning while controlling the level of specific feature's contributions to the model. We demonstrate that the Shapley regulariser enables learning which is robust to covariate shift of a given feature and fair learning which controls the SVs of sensitive features. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 261,709 |
2312.02136 | BerfScene: Bev-conditioned Equivariant Radiance Fields for Infinite 3D
Scene Generation | Generating large-scale 3D scenes cannot be done by simply applying existing 3D object synthesis techniques, since 3D scenes usually hold complex spatial configurations and consist of a number of objects at varying scales. We thus propose a practical and efficient 3D representation that incorporates an equivariant radiance field with the guidance of a bird's-eye view (BEV) map. Concretely, objects of synthesized 3D scenes could be easily manipulated through steering the corresponding BEV maps. Moreover, by adequately incorporating positional encoding and low-pass filters into the generator, the representation becomes equivariant to the given BEV map. Such equivariance allows us to produce large-scale, even infinite-scale, 3D scenes via synthesizing local scenes and then stitching them with smooth consistency. Extensive experiments on 3D scene datasets demonstrate the effectiveness of our approach. Our project website is at https://zqh0253.github.io/BerfScene/. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 412,706
1401.3846 | Fast Set Bounds Propagation Using a BDD-SAT Hybrid | Binary Decision Diagram (BDD) based set bounds propagation is a powerful approach to solving set-constraint satisfaction problems. However, prior BDD based techniques incur the significant overhead of constructing and manipulating graphs during search. We present a set-constraint solver which combines BDD-based set-bounds propagators with the learning abilities of a modern SAT solver. Together with a number of improvements beyond the basic algorithm, this solver is highly competitive with existing propagation based set constraint solvers. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 29,962
2407.00830 | DroBoost: An Intelligent Score and Model Boosting Method for Drone
Detection | Drone detection is a challenging object detection task where visibility conditions and quality of the images may be unfavorable, and detections might become difficult due to complex backgrounds, small visible objects, and hard-to-distinguish objects. Achieving high confidence for drone detections while eliminating false detections requires efficient algorithms and approaches. Our previous work uses YOLOv5 with both real and synthetic data and a Kalman-based tracker to track the detections and increase their confidence using temporal information. Our current work improves on the previous approach by combining several improvements. We used a more diverse dataset combining multiple sources, together with synthetic samples chosen from a large synthetic dataset based on the error analysis of the base model. Also, to obtain more resilient confidence scores for objects, we introduced a classification component that discriminates whether the object is a drone or not. Finally, we developed a more advanced scoring algorithm for object tracking that we use to adjust localization confidence. Furthermore, the proposed technique won 1st Place in the Drone vs. Bird Challenge (Workshop on Small-Drone Surveillance, Detection and Counteraction Techniques at ICIAP 2021). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 469,025
2501.06897 | ActiveGAMER: Active GAussian Mapping through Efficient Rendering | We introduce ActiveGAMER, an active mapping system that utilizes 3D Gaussian Splatting (3DGS) to achieve high-quality, real-time scene mapping and exploration. Unlike traditional NeRF-based methods, which are computationally demanding and restrict active mapping performance, our approach leverages the efficient rendering capabilities of 3DGS, allowing effective and efficient exploration in complex environments. The core of our system is a rendering-based information gain module that dynamically identifies the most informative viewpoints for next-best-view planning, enhancing both geometric and photometric reconstruction accuracy. ActiveGAMER also integrates a carefully balanced framework, combining coarse-to-fine exploration, post-refinement, and a global-local keyframe selection strategy to maximize reconstruction completeness and fidelity. Our system autonomously explores and reconstructs environments with state-of-the-art geometric and photometric accuracy and completeness, significantly surpassing existing approaches in both aspects. Extensive evaluations on benchmark datasets such as Replica and MP3D highlight ActiveGAMER's effectiveness in active mapping tasks. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 524,174 |
2412.11937 | Precise Length Control in Large Language Models | Large Language Models (LLMs) are increasingly used in production systems, powering applications such as chatbots, summarization, and question answering. Despite their success, controlling the length of their response remains a significant challenge, particularly for tasks requiring structured outputs or specific levels of detail. In this work, we propose a method to adapt pre-trained decoder-only LLMs for precise control of response length. Our approach incorporates a secondary length-difference positional encoding (LDPE) into the input embeddings, which counts down to a user-set response termination length. Fine-tuning with LDPE allows the model to learn to terminate responses coherently at the desired length, achieving mean token errors of less than 3 tokens. We also introduce Max New Tokens++, an extension that enables flexible upper-bound length control, rather than an exact target. Experimental results on tasks such as question answering and document summarization demonstrate that our method enables precise length control without compromising response quality. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 517,646 |
2104.09091 | Strategies for Democratization of Supercomputing: Availability,
Accessibility and Usability of High Performance Computing for Education and
Practice of Big Data Analytics | There has been an increasing interest in and growing need for high performance computing (HPC), popularly known as supercomputing, in domains such as textual analytics, business domains analytics, forecasting and natural language processing (NLP), in addition to the relatively mature supercomputing domains of quantum physics and biology. HPC has been widely used in computer science (CS) and other traditionally computation intensive disciplines, but has remained largely siloed away from the vast array of social, behavioral, business and economics disciplines. However, with ubiquitous big data, there is a compelling need to make HPC technologically and economically accessible, easy to use, and operationally democratized. Therefore, this research focuses on making two key contributions: the first is the articulation of strategies based on availability, accessibility and usability for the demystification and democratization of HPC, based on an analytical review of Caliburn, a notable supercomputer at its inception. The second contribution is a set of principles for HPC adoption based on an experiential narrative of HPC usage for textual analytics and NLP of social media data from a first time user perspective. Both the HPC usage process and the output of the early stage analytics are summarized. This research study synthesizes expert input on HPC democratization strategies, and chronicles the challenges and opportunities from a multidisciplinary perspective, of a case of rapid adoption of supercomputing for textual analytics and NLP. Deductive logic is used to identify strategies which can lead to efficacious engagement, adoption, production and sustained usage for research, teaching, application and innovation by researchers, faculty, professionals and students across a broad range of disciplines. | true | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | true | 231,113
1010.5954 | Random Graphs for Performance Evaluation of Recommender Systems | The purpose of this article is to introduce a new analytical framework dedicated to measuring performance of recommender systems. The standard approach is to assess the quality of a system by means of accuracy related statistics. However, the specificity of the environments in which recommender systems are deployed requires to pay much attention to speed and memory requirements of the algorithms. Unfortunately, it is implausible to assess accurately the complexity of various algorithms with formal tools. This can be attributed to the fact that such analyses are usually based on an assumption of dense representation of underlying data structures, whereas in real life the algorithms operate on sparse data and are implemented with collections dedicated for them. Therefore, we propose to measure the complexity of recommender systems with artificial datasets that possess real-life properties. We utilize a recently developed bipartite graph generator to evaluate how state-of-the-art recommender systems' behavior is determined and diversified by topological properties of the generated datasets. | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 8,059
1102.2739 | A General Framework for Development of the Cortex-like Visual Object
Recognition System: Waves of Spikes, Predictive Coding and Universal
Dictionary of Features | This study is focused on the development of the cortex-like visual object recognition system. We propose a general framework, which consists of three hierarchical levels (modules). These modules functionally correspond to the V1, V4 and IT areas. Both bottom-up and top-down connections between the hierarchical levels V4 and IT are employed. The higher the degree of matching between the input and the preferred stimulus, the shorter the response time of the neuron. Therefore information about a single stimulus is distributed in time and is transmitted by the waves of spikes. The reciprocal connections and waves of spikes implement predictive coding: an initial hypothesis is generated on the basis of information delivered by the first wave of spikes and is tested with the information carried by the consecutive waves. The development is considered as extraction and accumulation of features in V4 and objects in IT. Once stored, a feature can be disposed of if it is rarely activated. This causes an update of the feature repository. Consequently, objects in IT are also updated. This illustrates the growing process and the dynamical change of the topological structures of V4, IT and the connections between these areas. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | true | false | false | 9,163
2304.12031 | D2NT: A High-Performing Depth-to-Normal Translator | Surface normal holds significant importance in visual environmental perception, serving as a source of rich geometric information. However, the state-of-the-art (SoTA) surface normal estimators (SNEs) generally suffer from an unsatisfactory trade-off between efficiency and accuracy. To resolve this dilemma, this paper first presents a superfast depth-to-normal translator (D2NT), which can directly translate depth images into surface normal maps without calculating 3D coordinates. We then propose a discontinuity-aware gradient (DAG) filter, which adaptively generates gradient convolution kernels to improve depth gradient estimation. Finally, we propose a surface normal refinement module that can easily be integrated into any depth-to-normal SNEs, substantially improving the surface normal estimation accuracy. Our proposed algorithm demonstrates the best accuracy among all other existing real-time SNEs and achieves the SoTA trade-off between efficiency and accuracy. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 360,061 |
2011.00865 | WRSE -- a non-parametric weighted-resolution ensemble for predicting
individual survival distributions in the ICU | Dynamic assessment of mortality risk in the intensive care unit (ICU) can be used to stratify patients, inform about treatment effectiveness or serve as part of an early-warning system. Static risk scoring systems, such as APACHE or SAPS, have recently been supplemented with data-driven approaches that track the dynamic mortality risk over time. Recent works have focused on enhancing the information delivered to clinicians even further by producing full survival distributions instead of point predictions or fixed horizon risks. In this work, we propose a non-parametric ensemble model, Weighted Resolution Survival Ensemble (WRSE), tailored to estimate such dynamic individual survival distributions. Inspired by the simplicity and robustness of ensemble methods, the proposed approach combines a set of binary classifiers spaced according to a decay function reflecting the relevance of short-term mortality predictions. Models and baselines are evaluated under weighted calibration and discrimination metrics for individual survival distributions which closely reflect the utility of a model in ICU practice. We show competitive results with state-of-the-art probabilistic models, while greatly reducing training time by factors of 2-9x. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 204,406 |
2401.05447 | Can ChatGPT Compute Trustworthy Sentiment Scores from Bloomberg Market
Wraps? | We used a dataset of daily Bloomberg Financial Market Summaries from 2010 to 2023, reposted on large financial media, to determine how global news headlines may affect stock market movements using ChatGPT and a two-stage prompt approach. We document a statistically significant positive correlation between the sentiment score and future equity market returns over short to medium term, which reverts to a negative correlation over longer horizons. Validation of this correlation pattern across multiple equity markets indicates its robustness across equity regions and resilience to non-linearity, evidenced by comparison of Pearson and Spearman correlations. Finally, we provide an estimate of the optimal horizon that strikes a balance between reactivity to new information and correlation. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 420,795 |
2205.04035 | Visualization of Decision Trees based on General Line Coordinates to
Support Explainable Models | Visualization of Machine Learning (ML) models is an important part of the ML process to enhance the interpretability and prediction accuracy of the ML models. This paper proposes a new method, SPC-DT, to visualize Decision Trees (DTs) as interpretable models. This method uses a version of General Line Coordinates called Shifted Paired Coordinates (SPC). In SPC, each n-D point is visualized in a set of shifted pairs of 2-D Cartesian coordinates as a directed graph. The new method expands and complements the capabilities of existing methods to visualize DT models. It shows: (1) relations between attributes, (2) individual cases relative to the DT structure, (3) data flow in the DT, (4) how tight each split is to thresholds in the DT nodes, and (5) the density of cases in parts of the n-D space. This information is important for domain experts for evaluating and improving the DT models, including avoiding overgeneralization and overfitting of models, along with their performance. The benefits of the methods are demonstrated in case studies using three real datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 295,512
2307.14731 | Bi-level Network Design for UAM Vertiport Allocation Using
Activity-Based Transport Simulations | The design or optimization of transport systems is a difficult task. This is especially true in the case of the introduction of new transport modes in an existing system. The main reason is that even small additions and changes give rise to new travel patterns, likely resulting in an adaptation of the travel behavior of multiple other agents in the system. Here we consider the optimization of future Urban Air Mobility services under consideration of the effects induced by the new mode on an existing system. We tackle this problem through a bi-level network design approach, in which the discrete decisions of the network design planner are optimized based on the evaluated dynamic demand of the users' mode choices. We solve the activity-based network design problem (AB-NDP) using a Genetic Algorithm on a multi-objective optimization problem while evaluating the dynamic demand with the large-scale Multi-Agent Transport Simulation (MATSim) framework. The proposed bi-level approach is compared against the results of a coverage approach using a static demand method. The bi-level study shows better results for expected UAM demand and total travel time savings across the transportation system. Due to its generic character, the demonstrated utilization of a bi-level method is applicable to other mobility service design questions and to other regions. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 382,033
2502.00677 | LLM-based event log analysis techniques: A survey | Event log analysis is an important task that security professionals undertake. Event logs record key information on activities that occur on computing devices, and due to the substantial number of events generated, they consume a large amount of time and resources to analyse. This demanding and repetitive task is also prone to errors. To address these concerns, researchers have developed automated techniques to improve the event log analysis process. Large Language Models (LLMs) have recently demonstrated the ability to successfully perform a wide range of tasks that individuals would usually partake in, to high standards, and at a pace and degree of complexity that outperform humans. Due to this, researchers are rapidly investigating the use of LLMs for event log analysis. This includes fine-tuning, Retrieval-Augmented Generation (RAG) and in-context learning, which affect performance. These works demonstrate good progress, yet there is a need to understand the developing body of knowledge, identify commonalities between works, and identify key challenges and potential solutions to further developments in this domain. This paper aims to survey LLM-based event log analysis techniques, providing readers with an in-depth overview of the domain, gaps identified in previous research, and concluding with potential avenues to explore in future. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 529,492 |
1808.00262 | Saliency for Fine-grained Object Recognition in Domains with Scarce
Training Data | This paper investigates the role of saliency to improve the classification accuracy of a Convolutional Neural Network (CNN) for the case when scarce training data is available. Our approach consists in adding a saliency branch to an existing CNN architecture which is used to modulate the standard bottom-up visual features from the original image input, acting as an attentional mechanism that guides the feature extraction process. The main aim of the proposed approach is to enable the effective training of a fine-grained recognition model with limited training samples and to improve the performance on the task, thereby alleviating the need to annotate large datasets. The vast majority of saliency methods are evaluated on their ability to generate saliency maps, and not on their functionality in a complete vision pipeline. Our proposed pipeline allows evaluating saliency methods for the high-level task of object recognition. We perform extensive experiments on various fine-grained datasets (Flowers, Birds, Cars, and Dogs) under different conditions and show that saliency can considerably improve the network's performance, especially for the case of scarce training data. Furthermore, our experiments show that saliency methods that obtain improved saliency maps (as measured by traditional saliency benchmarks) also translate to saliency methods that yield improved performance gains when applied in an object recognition pipeline. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 104,350
1412.3022 | Fast Product-Matrix Regenerating Codes | Distributed storage systems support failures of individual devices by the use of replication or erasure correcting codes. While erasure correcting codes offer a better storage efficiency than replication for similar fault tolerance, they incur higher CPU consumption, higher network consumption and higher disk I/Os. To address these issues, codes specific to storage systems have been designed. Their main feature is the ability to repair a single lost disk efficiently. In this paper, we focus on one such class of codes that minimize network consumption during repair, namely regenerating codes. We implement the original Product-Matrix Regenerating codes as well as a new optimization we propose, and show that the resulting optimized codes achieve encoding speeds of 790 MB/s in typical settings. Reported speeds are significantly higher than in previous studies, highlighting that regenerating codes can be used with little CPU penalty. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 38,252
2006.01709 | Compressive Subspace Learning with Antenna Cross-correlations for
Wideband Spectrum Sensing | Compressive subspace learning (CSL) with the exploitation of space diversity has found a potential performance improvement for wideband spectrum sensing (WBSS). However, previous works mainly focus on either exploiting antenna auto-correlations or adopting a multiple-input multiple-output (MIMO) channel without considering the spatial correlations, which will degrade their performance. In this paper, we consider a spatially correlated MIMO channel and propose two CSL algorithms (i.e., mCSLSACC and vCSLACC) which exploit antenna cross-correlations, where the mCSLSACC utilizes an antenna averaging temporal decomposition, and the vCSLACC uses a spatial-temporal joint decomposition. For both algorithms, the conditions of statistical covariance matrices (SCMs) without noise corruption are derived. Through establishing the singular value relation of SCMs in a statistical sense between the proposed and traditional CSL algorithms, we show the superiority of the proposed CSL algorithms. By further depicting the receiving correlation matrix of the MIMO channel with the exponential correlation model, we give important closed-form expressions for the proposed CSL algorithms in terms of the amplification of singular values over traditional CSL algorithms. Such expressions make it possible to determine optimal algorithm parameters for high system performance in an analytical way. Simulations validate the correctness of this work and its improvement over existing works in terms of WBSS performance. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 179,846
0903.4014 | Construction of Codes for Wiretap Channel and Secret Key Agreement from
Correlated Source Outputs by Using Sparse Matrices | The aim of this paper is to prove coding theorems for the wiretap channel coding problem and the secret key agreement problem based on the notion of a hash property for an ensemble of functions. These theorems imply that codes using sparse matrices can achieve the optimal rate. Furthermore, fixed-rate universal coding theorems for a wiretap channel and a secret key agreement are also proved. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 3,401
1701.06452 | Learning what to look in chest X-rays with a recurrent visual attention
model | X-rays are commonly performed imaging tests that use small amounts of radiation to produce pictures of the organs, tissues, and bones of the body. X-rays of the chest are used to detect abnormalities or diseases of the airways, blood vessels, bones, heart, and lungs. In this work we present a stochastic attention-based model that is capable of learning what regions within a chest X-ray scan should be visually explored in order to conclude that the scan contains a specific radiological abnormality. The proposed model is a recurrent neural network (RNN) that learns to sequentially sample the entire X-ray and focus only on informative areas that are likely to contain the relevant information. We report on experiments carried out with more than $100,000$ X-rays containing enlarged hearts or medical devices. The model has been trained using reinforcement learning methods to learn task-specific policies. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 67,138 |
2107.07277 | Passivity-based Decentralized Control for Discrete-time Large-scale
Systems | Passivity theory has recently contributed to developing decentralized control schemes for large-scale systems. Many decentralized passivity-based control schemes are designed in continuous-time. It is well-known, however, that the passivity properties of continuous-time systems may be lost under discretization. In this work, we present a novel stabilizing decentralized control scheme by ensuring passivity for discrete-time systems directly and thus avoiding the issue of passivity preservation. The controller is synthesized by locally solving a semidefinite program offline for each subsystem in a decentralized fashion. This program comprises local conditions ensuring that the corresponding subsystem is locally passive. Passivity is ensured with respect to a local virtual output which is different from the local actual output. The program also comprises local conditions ensuring that the local passivity of all subsystems implies the asymptotic stability of the whole system. The performance of the proposed controller is evaluated on a case study in DC microgrids. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 246,370 |
1905.05894 | Online Normalization for Training Neural Networks | Online Normalization is a new technique for normalizing the hidden activations of a neural network. Like Batch Normalization, it normalizes the sample dimension. While Online Normalization does not use batches, it is as accurate as Batch Normalization. We resolve a theoretical limitation of Batch Normalization by introducing an unbiased technique for computing the gradient of normalized activations. Online Normalization works with automatic differentiation by adding statistical normalization as a primitive. This technique can be used in cases not covered by some other normalizers, such as recurrent networks, fully connected networks, and networks with activation memory requirements prohibitive for batching. We show its applications to image classification, image segmentation, and language modeling. We present formal proofs and experimental results on ImageNet, CIFAR, and PTB datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 130,844 |
2411.14572 | Towards Knowledge Checking in Retrieval-augmented Generation: A
Representation Perspective | Retrieval-Augmented Generation (RAG) systems have shown promise in enhancing the performance of Large Language Models (LLMs). However, these systems face challenges in effectively integrating external knowledge with the LLM's internal knowledge, often leading to issues with misleading or unhelpful information. This work aims to provide a systematic study on knowledge checking in RAG systems. We conduct a comprehensive analysis of LLM representation behaviors and demonstrate the significance of using representations in knowledge checking. Motivated by the findings, we further develop representation-based classifiers for knowledge filtering. We show substantial improvements in RAG performance, even when dealing with noisy knowledge databases. Our study provides new insights into leveraging LLM representations for enhancing the reliability and effectiveness of RAG systems. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 510,236 |
1910.02219 | Recurrent neural network based decision support system | Decision Support Systems (DSS) in complex installations play a crucial role in assisting operators in decision making during abnormal transients and process disturbances, by actively displaying the status of the system, recording events and their time of occurrence, and suggesting relevant actions. The complexity and dynamics of complex systems require a careful selection of suitable neural network architecture, so as to improve diagnostic accuracy. In this work, we present a technique to develop a fault diagnostic decision support system using a recurrent neural network and Principal Component Analysis (PCA). We utilized the PCA method for noise filtering in the pre-diagnostic stage, and evaluate the predictive capability of a radial basis recurrent network on representative data derived from the simulation of a pressurized nuclear reactor. The process was validated using data from different fault scenarios, and the fault signatures were used as the input. The predictive outputs required are the locations and sizes of the faults. The results show that the radial basis network gives accurate predictions. Selected hyperparameters and diagnostic results are also presented in this paper. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | true | false | false | 148,175
2003.10838 | Prob2Vec: Mathematical Semantic Embedding for Problem Retrieval in
Adaptive Tutoring | We propose a new application of embedding techniques for problem retrieval in adaptive tutoring. The objective is to retrieve problems whose mathematical concepts are similar. There are two challenges: First, like sentences, problems helpful to tutoring are never exactly the same in terms of the underlying concepts. Instead, good problems mix concepts in innovative ways, while still displaying continuity in their relationships. Second, it is difficult for humans to determine a similarity score that is consistent across a large enough training set. We propose a hierarchical problem embedding algorithm, called Prob2Vec, that consists of abstraction and embedding steps. Prob2Vec achieves 96.88% accuracy on a problem similarity test, in contrast to 75% from directly applying state-of-the-art sentence embedding methods. It is interesting that Prob2Vec is able to distinguish very fine-grained differences among problems, an ability humans need time and effort to acquire. In addition, the sub-problem of concept labeling with imbalanced training data set is interesting in its own right. It is a multi-label problem suffering from dimensionality explosion, which we propose ways to ameliorate. We propose the novel negative pre-training algorithm that dramatically reduces false negative and positive ratios for classification, using an imbalanced training data set. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 169,450
1401.6126 | Delegating Custom Object Detection Tasks to a Universal Classification
System | In this paper, a concept of a multipurpose object detection system, recently introduced in our previous work, is clarified. The business aspect of this method is the transformation of a classifier into an object detector/locator via an image grid. This is a universal framework for locating objects of interest through classification. The framework standardizes and simplifies implementation of custom systems by doing only a custom analysis of the classification results on the image grid. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 30,304
2002.10101 | GRET: Global Representation Enhanced Transformer | Transformer, based on the encoder-decoder framework, has achieved state-of-the-art performance on several natural language generation tasks. The encoder maps the words in the input sentence into a sequence of hidden states, which are then fed into the decoder to generate the output sentence. These hidden states usually correspond to the input words and focus on capturing local information. However, the global (sentence level) information is seldom explored, leaving room for the improvement of generation quality. In this paper, we propose a novel global representation enhanced Transformer (GRET) to explicitly model global representation in the Transformer network. Specifically, in the proposed model, an external state is generated for the global representation from the encoder. The global representation is then fused into the decoder during the decoding process to improve generation quality. We conduct experiments in two text generation tasks: machine translation and text summarization. Experimental results on four WMT machine translation tasks and LCSTS text summarization task demonstrate the effectiveness of the proposed approach on natural language generation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 165,285 |
1806.08514 | Virtual Codec Supervised Re-Sampling Network for Image Compression | In this paper, we propose an image re-sampling compression method by learning a virtual codec network (VCN) to resolve the non-differentiable problem of the quantization function for image compression. Here, image re-sampling refers not only to full-resolution re-sampling but also to low-resolution re-sampling. We generalize this method for the standard-compliant image compression (SCIC) framework and the deep neural networks based compression (DNNC) framework. Specifically, an input image is measured by the re-sampling network (RSN) to get re-sampled vectors. Then, these vectors are directly quantized in the feature space in SCIC, or discrete cosine transform coefficients of these vectors are quantized to further improve coding efficiency in DNNC. At the encoder, the quantized vectors or coefficients are losslessly compressed by arithmetic coding. At the receiver, the decoded vectors are utilized to restore the input image by the image decoder network (IDN). In order to train the RSN and IDN networks together in an end-to-end fashion, our VCN network imitates the projection from the re-sampled vectors to the IDN-decoded image. As a result, gradients from the IDN network to the RSN network can be approximated by the VCN network's gradient. Because dimension reduction can be further achieved by quantization in some dimensional space after image re-sampling within the auto-encoder architecture, we can well initialize our networks from pre-trained auto-encoder networks. Through extensive experiments and analysis, it is verified that the proposed method has more effectiveness and versatility than many state-of-the-art approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 101,166
2412.08388 | LOMA: Language-assisted Semantic Occupancy Network via Triplane Mamba | Vision-based 3D occupancy prediction has become a popular research task due to its versatility and affordability. Nowadays, conventional methods usually project the image-based vision features to 3D space and learn the geometric information through the attention mechanism, enabling 3D semantic occupancy prediction. However, these works usually face two main challenges: 1) Limited geometric information. Due to the lack of geometric information in the image itself, it is challenging to directly predict 3D space information, especially in large-scale outdoor scenes. 2) Locally restricted interaction. Due to the quadratic complexity of the attention mechanism, they often use modified local attention to fuse features, resulting in a restricted fusion. To address these problems, in this paper, we propose a language-assisted 3D semantic occupancy prediction network, named LOMA. In the proposed vision-language framework, we first introduce a VL-aware Scene Generator (VSG) module to generate the 3D language feature of the scene. By leveraging the vision-language model, this module provides implicit geometric knowledge and explicit semantic information from the language. Furthermore, we present a Tri-plane Fusion Mamba (TFM) block to efficiently fuse the 3D language feature and 3D vision feature. The proposed module not only fuses the two features with global modeling but also avoids excessive computation cost. Experiments on the SemanticKITTI and SSCBench-KITTI360 datasets show that our algorithm achieves new state-of-the-art performance in both geometric and semantic completion tasks. Our code will be released soon. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 516,065
1705.04254 | Community Detection in Signed Networks: an Error-Correcting Code
Approach | In this paper, we consider the community detection problem in signed networks, where there are two types of edges: positive edges (friends) and negative edges (enemies). One renowned theorem of signed networks, known as Harary's theorem, states that structurally balanced signed networks are clusterable. By viewing each cycle in a signed network as a parity-check constraint, we show that the community detection problem in a signed network with two communities is equivalent to the decoding problem for a parity-check code. We also show how one can use two renowned decoding algorithms in error-correcting codes for community detection in signed networks: the bit-flipping algorithm, and the belief propagation algorithm. In addition to these two algorithms, we also propose a new community detection algorithm, called the Hamming distance algorithm, that performs community detection by finding a codeword that minimizes the Hamming distance. We compare the performance of these three algorithms by conducting various experiments with known ground truth. Our experimental results show that our Hamming distance algorithm outperforms the other two. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 73,298
2205.09852 | Deconfounding Actor-Critic Network with Policy Adaptation for Dynamic
Treatment Regimes | Despite intense efforts in basic and clinical research, an individualized ventilation strategy for critically ill patients remains a major challenge. Recently, dynamic treatment regime (DTR) with reinforcement learning (RL) on electronic health records (EHR) has attracted interest from both the healthcare industry and machine learning research community. However, most learned DTR policies might be biased due to the existence of confounders. Although some treatment actions non-survivors received may be helpful, if confounders cause the mortality, the training of RL models guided by long-term outcomes (e.g., 90-day mortality) would punish those treatment actions causing the learned DTR policies to be suboptimal. In this study, we develop a new deconfounding actor-critic network (DAC) to learn optimal DTR policies for patients. To alleviate confounding issues, we incorporate a patient resampling module and a confounding balance module into our actor-critic framework. To avoid punishing the effective treatment actions non-survivors received, we design a short-term reward to capture patients' immediate health state changes. Combining short-term with long-term rewards could further improve the model performance. Moreover, we introduce a policy adaptation method to successfully transfer the learned model to new-source small-scale datasets. The experimental results on one semi-synthetic and two different real-world datasets show the proposed model outperforms the state-of-the-art models. The proposed model provides individualized treatment decisions for mechanical ventilation that could improve patient outcomes. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 297,444 |
1802.03692 | Nearly Optimal Adaptive Procedure with Change Detection for
Piecewise-Stationary Bandit | Multi-armed bandit (MAB) is a class of online learning problems where a learning agent aims to maximize its expected cumulative reward while repeatedly selecting to pull arms with unknown reward distributions. We consider a scenario where the reward distributions may change in a piecewise-stationary fashion at unknown time steps. We show that by incorporating a simple change-detection component with classic UCB algorithms to detect and adapt to changes, our so-called M-UCB algorithm can achieve nearly optimal regret bound on the order of $O(\sqrt{MKT\log T})$, where $T$ is the number of time steps, $K$ is the number of arms, and $M$ is the number of stationary segments. Comparison with the best available lower bound shows that our M-UCB is nearly optimal in $T$ up to a logarithmic factor. We also compare M-UCB with the state-of-the-art algorithms in numerical experiments using a public Yahoo! dataset to demonstrate its superior performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 90,045 |
2211.06975 | Ground Truth Inference for Weakly Supervised Entity Matching | Entity matching (EM) refers to the problem of identifying pairs of data records in one or more relational tables that refer to the same entity in the real world. Supervised machine learning (ML) models currently achieve state-of-the-art matching performance; however, they require many labeled examples, which are often expensive or infeasible to obtain. This has inspired us to approach data labeling for EM using weak supervision. In particular, we use the labeling function abstraction popularized by Snorkel, where each labeling function (LF) is a user-provided program that can generate many noisy match/non-match labels quickly and cheaply. Given a set of user-written LFs, the quality of data labeling depends on a labeling model to accurately infer the ground-truth labels. In this work, we first propose a simple but powerful labeling model for general weak supervision tasks. Then, we tailor the labeling model specifically to the task of entity matching by considering the EM-specific transitivity property. The general form of our labeling model is simple while substantially outperforming the best existing method across ten general weak supervision datasets. To tailor the labeling model for EM, we formulate an approach to ensure that the final predictions of the labeling model satisfy the transitivity property required in EM, utilizing an exact solution where possible and an ML-based approximation in remaining cases. On two single-table and nine two-table real-world EM datasets, we show that our labeling model results in a 9% higher F1 score on average than the best existing method. We also show that a deep learning EM end model (DeepMatcher) trained on labels generated from our weak supervision approach is comparable to an end model trained using tens of thousands of ground-truth labels, demonstrating that our approach can significantly reduce the labeling efforts required in EM. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | 330,087
2206.14011 | Taxonomy and evolution predicting using deep learning in images | Molecular and morphological characters, as important parts of biological taxonomy, are contradictory but need to be integrated. Organism image recognition and bioinformatics are emerging, active research problems, but a gap remains between them. In this work, a multi-branching recognition framework mediated by genetic information bridges this barrier, establishing the link between the macro-morphology and micro-molecular information of mushrooms. The novel multi-perspective structure is proposed to fuse the feature images from three branching models, which significantly improves the accuracy of recognition by about 10%, to more than 90%. Further, genetic information is incorporated into the mushroom image recognition task by using genetic distance embeddings as the representation space for predicting image distance and species identification. Semantic overfitting of traditional classification tasks and the granularity of fine-grained image recognition are also discussed in depth for the first time. The generalizability of the model was investigated in fine-grained scenarios using zero-shot learning tasks, which could predict the taxonomic and evolutionary information of unseen samples. We present the first method to map images to DNA, namely using an encoder to map images to genetic distances and then decoding DNA through a pre-trained decoder, where the total test accuracy on 37 species for DNA prediction is 87.45%. This study creates a novel recognition framework by systematically studying the mushroom image recognition problem, bridging the gap between macroscopic biological information and microscopic molecular information, which will provide a new reference for intelligent biometrics in the future. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 305,156
2207.08298 | Towards Programmable Memory Controller for Tensor Decomposition | Tensor decomposition has become an essential tool in many data science applications. Sparse Matricized Tensor Times Khatri-Rao Product (MTTKRP) is the pivotal kernel in tensor decomposition algorithms that decompose higher-order real-world large tensors into multiple matrices. Accelerating MTTKRP can speed up the tensor decomposition process immensely. Sparse MTTKRP is a challenging kernel to accelerate due to its irregular memory access characteristics. Implementing accelerators on Field Programmable Gate Array (FPGA) for kernels such as MTTKRP is attractive due to the energy efficiency and the inherent parallelism of FPGA. This paper explores the opportunities, key challenges, and an approach for designing a custom memory controller on FPGA for MTTKRP while exploring the parameter space of such a custom memory controller. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 308,533 |
2408.11619 | Data-driven Modeling of Combined Sewer Systems for Urban Sustainability:
An Empirical Evaluation | Climate change poses complex challenges, with extreme weather events becoming increasingly frequent and difficult to model. Examples include the dynamics of Combined Sewer Systems (CSS). Overburdened CSS during heavy rainfall will overflow untreated wastewater into surface water bodies. Classical approaches to modeling the impact of extreme rainfall events rely on physical simulations, which are particularly challenging to create for large urban infrastructures. Deep Learning (DL) models offer a cost-effective alternative for modeling the complex dynamics of sewer systems. In this study, we present a comprehensive empirical evaluation of several state-of-the-art DL time series models for predicting sewer system dynamics in a large urban infrastructure, utilizing three years of measurement data. We especially investigate the potential of DL models to maintain predictive precision during network outages by comparing global models, which have access to all variables within the sewer system, and local models, which are limited to data from a restricted set of local sensors. Our findings demonstrate that DL models can accurately predict the dynamics of sewer system load, even under network outage conditions. These results suggest that DL models can effectively aid in balancing the load redistribution in CSS, thereby enhancing the sustainability and resilience of urban infrastructures. | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | 482,366 |
2410.03754 | Enhancing Retrieval in QA Systems with Derived Feature Association | Retrieval augmented generation (RAG) has become the standard in long context question answering (QA) systems. However, typical implementations of RAG rely on a rather naive retrieval mechanism, in which texts whose embeddings are most similar to that of the query are deemed most relevant. This has consequences in subjective QA tasks, where the most relevant text may not directly contain the answer. In this work, we propose a novel extension to RAG systems, which we call Retrieval from AI Derived Documents (RAIDD). RAIDD leverages the full power of the LLM in the retrieval process by deriving inferred features, such as summaries and example questions, from the documents at ingest. We demonstrate that this approach significantly improves the performance of RAG systems on long-context QA tasks. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 494,942 |
2311.08851 | Data Augmentations in Deep Weight Spaces | Learning in weight spaces, where neural networks process the weights of other deep neural networks, has emerged as a promising research direction with applications in various fields, from analyzing and editing neural fields and implicit neural representations, to network pruning and quantization. Recent works designed architectures for effective learning in that space, which takes into account its unique, permutation-equivariant, structure. Unfortunately, so far these architectures suffer from severe overfitting and were shown to benefit from large datasets. This poses a significant challenge because generating data for this learning setup is laborious and time-consuming since each data sample is a full set of network weights that has to be trained. In this paper, we address this difficulty by investigating data augmentations for weight spaces, a set of techniques that enable generating new data examples on the fly without having to train additional input weight space elements. We first review several recently proposed data augmentation schemes and divide them into categories. We then introduce a novel augmentation scheme based on the Mixup method. We evaluate the performance of these techniques on existing benchmarks as well as new benchmarks we generate, which can be valuable for future studies. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 407,896
1805.01278 | Learning Pretopological Spaces to Model Complex Propagation Phenomena: A Multiple Instance Learning Approach Based on a Logical Modeling | This paper addresses the problem of learning the concept of "propagation" in the pretopology theoretical formalism. Our proposal is first to define the pseudo-closure operator (modeling the propagation concept) as a logical combination of neighborhoods. We show that learning such an operator lapses into the Multiple Instance (MI) framework, where the learning process is performed on bags of instances instead of individual instances. Though this framework is well suited for this task, its use for learning a pretopological space leads to a set of bags exponential in size. To overcome this issue we thus propose a learning method based on a low estimation of the bags covered by a concept under construction. As an experiment, percolation processes (forest fires typically) are simulated and the corresponding propagation models are learned based on a subset of observations. It reveals that the proposed MI approach is significantly more efficient on the task of propagation model recognition than existing methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 96,625
2408.10373 | Competing Social Contagions with Opinion Dependent Infectivity | The spread of disinformation (maliciously spread false information) in online social networks has become an important problem in today's society. Disinformation's spread is facilitated by the fact that individuals often accept false information based on cognitive biases which predispose them to believe information that they have heard repeatedly or that aligns with their beliefs. Moreover, disinformation often spreads in direct competition with a corresponding true information. To model these phenomena, we develop a model for two competing beliefs spreading on a social network, where individuals have an internal opinion that models their cognitive biases and modulates their likelihood of adopting one of the competing beliefs. By numerical simulations of an agent-based model and a mean-field description of the dynamics, we study how the long-term dynamics of the spreading process depends on the initial conditions for the number of spreaders and the initial opinion of the population. We find that the addition of cognitive biases enriches the transient dynamics of the spreading process, facilitating behavior such as the revival of a dying belief and the overturning of an initially widespread opinion. Finally, we study how external recruitment of spreaders can lead to the eventual dominance of one of the two beliefs. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 481,824 |
2008.02725 | A Sensitivity Analysis Approach for Evaluating a Radar Simulation for Virtual Testing of Autonomous Driving Functions | Simulation-based testing is a promising approach to significantly reduce the validation effort of automated driving functions. Realistic models of environment perception sensors such as camera, radar and lidar play a key role in this testing strategy. A generally accepted method to validate these sensor models does not yet exist. Particularly radar has traditionally been one of the most difficult sensors to model. Although promising as an alternative to real test drives, virtual tests are time-consuming due to the fact that they simulate the entire radar system in detail, using computation-intensive simulation techniques to approximate the propagation of electromagnetic waves. In this paper, we introduce a sensitivity analysis approach for developing and evaluating a radar simulation, with the objective to identify the parameters with the greatest impact regarding the system under test. A modular radar system simulation is presented and parameterized to conduct a sensitivity analysis in order to evaluate a spatial clustering algorithm as the system under test, while comparing the output from the radar model to real driving measurements to ensure a realistic model behavior. The presented approach is evaluated and it is demonstrated that with this approach results from different situations can be traced back to the contribution of the individual sub-modules of the radar simulation. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 190,691
1902.05194 | Non-contact photoplethysmogram and instantaneous heart rate estimation from infrared face video | Extracting the instantaneous heart rate (iHR) from face videos has been well studied in recent years. It is well known that changes in skin color due to blood flow can be captured using conventional cameras. One of the main limitations of methods that rely on this principle is the need of an illumination source. Moreover, they have to be able to operate under different light conditions. One way to avoid these constraints is using infrared cameras, allowing the monitoring of iHR under low light conditions. In this work, we present a simple, principled signal extraction method that recovers the iHR from infrared face videos. We tested the procedure on 7 participants, for whom we recorded an electrocardiogram simultaneously with their infrared face video. We checked that the recovered signal matched the ground truth iHR, showing that infrared is a promising alternative to conventional video imaging for heart rate monitoring, especially in low light conditions. Code is available at https://github.com/natalialmg/IR_iHR | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 121,494
2003.03954 | Probabilistic Egocentric Motion Correction of Lidar Point Cloud and Projection to Camera Images for Moving Platforms | The fusion of sensor data from heterogeneous sensors is crucial for robust perception in various robotics applications that involve moving platforms, for instance, autonomous vehicle navigation. In particular, combining camera and lidar sensors enables the projection of precise range information of the surrounding environment onto visual images. It also makes it possible to label each lidar point with visual segmentation/classification results for 3D mapping, which facilitates a higher level understanding of the scene. The task is however considered non-trivial due to intrinsic and extrinsic sensor calibration, and the distortion of lidar points resulting from the ego-motion of the platform. Despite the existence of many lidar ego-motion correction methods, the errors in the correction process due to uncertainty in ego-motion estimation are not possible to remove completely. It is thus essential to consider the problem a probabilistic process where the ego-motion estimation uncertainty is modelled and considered consistently. The paper investigates the probabilistic lidar ego-motion correction and lidar-to-camera projection, where both the uncertainty in the ego-motion estimation and time jitter in sensory measurements are incorporated. The proposed approach is validated both in simulation and using real-world data collected from an electric vehicle retrofitted with wide-angle cameras and a 16-beam scanning lidar. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 167,415
2303.02851 | A Survey on Incremental Update for Neural Recommender Systems | Recommender Systems (RS) aim to provide personalized suggestions of items for users against consumer over-choice. Although extensive research has been conducted to address different aspects and challenges of RS, there still exists a gap between academic research and industrial applications. Specifically, most of the existing models still work in an offline manner, in which the recommender is trained on a large static training set and evaluated on a very restrictive testing set in a one-time process. RS will stay unchanged until the next batch retrain is performed. We frame such RS as Batch Update Recommender Systems (BURS). In reality, they have to face the challenges where RS are expected to be instantly updated with new data streaming in, and generate updated recommendations for current user activities based on the newly arrived data. We frame such RS as Incremental Update Recommender Systems (IURS). In this article, we offer a systematic survey of incremental update for neural recommender systems. We begin the survey by introducing key concepts and formulating the task of IURS. We then illustrate the challenges in IURS compared with traditional BURS. Afterwards, we detail the introduction of existing literature and evaluation issues. We conclude the survey by outlining some prominent open research issues in this area. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 349,504 |
2108.01802 | Deformation Recovery Control and Post-Impact Trajectory Replanning for Collision-Resilient Mobile Robots | The paper focuses on collision-inclusive motion planning for impact-resilient mobile robots. We propose a new deformation recovery and replanning strategy to handle collisions that may occur at run-time. Contrary to collision avoidance methods that generate trajectories only in conservative local space or require collision checking that has high computational cost, our method directly generates (local) trajectories imposing only waypoint constraints. If a collision occurs, our method then estimates the post-impact state and computes from there an intermediate waypoint to recover from the collision. To do so, we develop two novel components: 1) a deformation recovery controller that optimizes the robot's states during the post-impact recovery phase, and 2) a post-impact trajectory replanner that adjusts the next waypoint with the information from the collision for the robot to pass through and generates a polynomial-based minimum effort trajectory. The proposed strategy is evaluated experimentally with an omni-directional impact-resilient wheeled robot. The robot is designed in house, and it can perceive collisions with the aid of Hall effect sensors embodied between the robot's main chassis and a surrounding deflection ring-like structure. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 249,128
1108.5703 | Web Pages Clustering: A New Approach | The rapid growth of the web has resulted in a vast volume of information. Information availability at a rapid speed to the user is vital. The English language (or any, for that matter) has a lot of ambiguity in the usage of words. So there is no guarantee that a keyword based search engine will provide the required results. This paper introduces the use of a standardised dictionary to obtain the context with which a keyword is used and in turn cluster the results based on this context. These ideas can be merged with a metasearch engine to enhance the search efficiency. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 11,856
2211.10355 | Discriminating sensor activation in activity recognition within multi-occupancy environments based on nearby interaction | This work presents a computer model to discriminate sensor activation in multi-occupancy environments based on proximity interaction. Current proximity-based and indoor location methods allow the estimation of the positions or areas where inhabitants carry out their daily human activities. The spatial-temporal relation between location and sensor activations is described in this work to generate a sensor interaction matrix for each inhabitant. This enables the use of classical HAR models to reduce the complexity of the multi-occupancy problem. A case study deployed with UWB and binary sensors is presented. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 331,284
2207.01822 | Ensemble feature selection with data-driven thresholding for Alzheimer's disease biomarker discovery | Healthcare datasets present many challenges to both machine learning and statistics as their data are typically heterogeneous, censored, high-dimensional and have missing information. Feature selection is often used to identify the important features but can produce unstable results when applied to high-dimensional data, selecting a different set of features on each iteration. The stability of feature selection can be improved with the use of feature selection ensembles, which aggregate the results of multiple base feature selectors. A threshold must be applied to the final aggregated feature set to separate the relevant features from the redundant ones. A fixed threshold, which is typically applied, offers no guarantee that the final set of selected features contains only relevant features. This work develops several data-driven thresholds to automatically identify the relevant features in an ensemble feature selector and evaluates their predictive accuracy and stability. To demonstrate the applicability of these methods to clinical data, they are applied to data from two real-world Alzheimer's disease (AD) studies. AD is a progressive neurodegenerative disease with no known cure, that begins at least 2-3 decades before overt symptoms appear, presenting an opportunity for researchers to identify early biomarkers that might identify patients at risk of developing AD. Features identified by applying these methods to both datasets reflect current findings in the AD literature. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 306,307
2401.08404 | Training and Comparison of nnU-Net and DeepMedic Methods for Autosegmentation of Pediatric Brain Tumors | Brain tumors are the most common solid tumors and the leading cause of cancer-related death among children. Tumor segmentation is essential in surgical and treatment planning, and response assessment and monitoring. However, manual segmentation is time-consuming and has high inter-operator variability, underscoring the need for more efficient methods. We compared two deep learning-based 3D segmentation models, DeepMedic and nnU-Net, after training with pediatric-specific multi-institutional brain tumor data based on multi-parametric MRI scans. Multi-parametric preoperative MRI scans of 339 pediatric patients (n=293 internal and n=46 external cohorts) with a variety of tumor subtypes, were preprocessed and manually segmented into four tumor subregions, i.e., enhancing tumor (ET), non-enhancing tumor (NET), cystic components (CC), and peritumoral edema (ED). After training, performance of the two models on internal and external test sets was evaluated using Dice scores, sensitivity, and Hausdorff distance with reference to ground truth manual segmentations. Dice score for nnU-Net internal test sets was (mean +/- SD (median)) 0.9+/-0.07 (0.94) for WT, 0.77+/-0.29 for ET, 0.66+/-0.32 for NET, 0.71+/-0.33 for CC, and 0.71+/-0.40 for ED, respectively. For DeepMedic the Dice scores were 0.82+/-0.16 for WT, 0.66+/-0.32 for ET, 0.48+/-0.27 for NET, 0.48+/-0.36 for CC, and 0.19+/-0.33 for ED, respectively. Dice scores were significantly higher for nnU-Net (p<=0.01). External validation of the trained nnU-Net model on the multi-institutional BraTS-PEDs 2023 dataset revealed high generalization capability in segmentation of whole tumor and tumor core with Dice scores of 0.87+/-0.13 (0.91) and 0.83+/-0.18 (0.89), respectively. Pediatric-specific data trained nnU-Net model is superior to DeepMedic for whole tumor and subregion segmentation of pediatric brain tumors. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 421,873
2210.15231 | Unsupervised Boundary-Aware Language Model Pretraining for Chinese Sequence Labeling | Boundary information is critical for various Chinese language processing tasks, such as word segmentation, part-of-speech tagging, and named entity recognition. Previous studies usually resorted to the use of a high-quality external lexicon, where lexicon items can offer explicit boundary information. However, to ensure the quality of the lexicon, great human effort is always necessary, which has been generally ignored. In this work, we suggest unsupervised statistical boundary information instead, and propose an architecture to encode the information directly into pre-trained language models, resulting in Boundary-Aware BERT (BABERT). We apply BABERT for feature induction of Chinese sequence labeling tasks. Experimental results on ten benchmarks of Chinese sequence labeling demonstrate that BABERT can provide consistent improvements on all datasets. In addition, our method can complement previous supervised lexicon exploration, where further improvements can be achieved when integrated with external lexicon information. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 326,869
2202.01380 | Learning Mechanically Driven Emergent Behavior with Message Passing Neural Networks | From designing architected materials to connecting mechanical behavior across scales, computational modeling is a critical tool in solid mechanics. Recently, there has been a growing interest in using machine learning to reduce the computational cost of physics-based simulations. Notably, while machine learning approaches that rely on Graph Neural Networks (GNNs) have shown success in learning mechanics, the performance of GNNs has yet to be investigated on a myriad of solid mechanics problems. In this work, we examine the ability of GNNs to predict a fundamental aspect of mechanically driven emergent behavior: the connection between a column's geometric structure and the direction that it buckles. To accomplish this, we introduce the Asymmetric Buckling Columns (ABC) dataset, a dataset comprised of three sub-datasets of asymmetric and heterogeneous column geometries where the goal is to classify the direction of symmetry breaking (left or right) under compression after the onset of instability. Because of complex local geometry, the "image-like" data representations required for implementing standard convolutional neural network based metamodels are not ideal, thus motivating the use of GNNs. In addition to investigating GNN model architecture, we study the effect of different input data representation approaches, data augmentation, and combining multiple models as an ensemble. While we were able to obtain good results, we also showed that predicting solid mechanics based emergent behavior is non-trivial. Because both our model implementation and dataset are distributed under open-source licenses, we hope that future researchers can build on our work to create enhanced mechanics-specific machine learning pipelines for capturing the behavior of complex geometric structures. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 278,463
1910.06960 | Deep Learning for Massive MIMO with 1-Bit ADCs: When More Antennas Need Fewer Pilots | This paper considers uplink massive MIMO systems with 1-bit analog-to-digital converters (ADCs) and develops a deep-learning based channel estimation framework. In this framework, the prior channel estimation observations and deep neural network models are leveraged to learn the non-trivial mapping from quantized received measurements to channels. For that, we derive the sufficient length and structure of the pilot sequence to guarantee the existence of this mapping function. This leads to the interesting, and counter-intuitive, observation that when more antennas are employed by the massive MIMO base station, our proposed deep learning approach achieves better channel estimation performance, for the same pilot sequence length. Equivalently, for the same channel estimation performance, this means that when more antennas are employed, fewer pilots are required. This observation is also analytically proved for some special channel models. Simulation results confirm our observations and show that more antennas lead to better channel estimation both in terms of the normalized mean squared error and the achievable signal-to-noise ratio per antenna. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 149,491
1701.02426 | Scene Graph Generation by Iterative Message Passing | Understanding a visual scene goes beyond recognizing individual objects in isolation. Relationships between objects also constitute rich semantic information about the scene. In this work, we explicitly model the objects and their relationships using scene graphs, a visually-grounded graphical structure of an image. We propose a novel end-to-end model that generates such structured scene representation from an input image. The model solves the scene graph inference problem using standard RNNs and learns to iteratively improve its predictions via message passing. Our joint inference model can take advantage of contextual cues to make better predictions on objects and their relationships. The experiments show that our model significantly outperforms previous methods for generating scene graphs using the Visual Genome dataset and inferring support relations with the NYU Depth v2 dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 66,551
2205.09717 | Flexible Modeling and Multitask Learning using Differentiable Tree Ensembles | Decision tree ensembles are widely used and competitive learning models. Despite their success, popular toolkits for learning tree ensembles have limited modeling capabilities. For instance, these toolkits support a limited number of loss functions and are restricted to single task learning. We propose a flexible framework for learning tree ensembles, which goes beyond existing toolkits to support arbitrary loss functions, missing responses, and multi-task learning. Our framework builds on differentiable (a.k.a. soft) tree ensembles, which can be trained using first-order methods. However, unlike classical trees, differentiable trees are difficult to scale. We therefore propose a novel tensor-based formulation of differentiable trees that allows for efficient vectorization on GPUs. We perform experiments on a collection of 28 real open-source and proprietary datasets, which demonstrate that our framework can lead to 100x more compact and 23% more expressive tree ensembles than those by popular toolkits. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 297,390
2501.10030 | Informativity Conditions for Multiple Signals: Properties, Experimental Design, and Applications | Recent studies highlight the importance of the persistently exciting condition in a single signal sequence for model identification and data-driven control methodologies. However, maintaining prolonged excitation in control signals introduces significant challenges, as continuous excitation can reduce the lifetime of mechanical devices. In this paper, we introduce three informativity conditions for various types of multi-signal data, each augmented by weight factors. We explore the interrelations between these conditions and their rank properties in linear time-invariant systems. Furthermore, we introduce open-loop experimental design methods tailored to each of the three conditions, which can synthesize the required excitation conditions either offline or online, even in the presence of limited information within each signal segment. We demonstrate the effectiveness of these informativity conditions in least-squares identification. Additionally, all three conditions can extend Willems' fundamental lemma and are utilized to assess the properties of the system. Illustrative examples confirm that these conditions yield satisfactory outcomes in both least-squares identification and the construction of data-driven controllers. | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | 525,361
2310.10666 | Extracting Physical Causality from Measurements to Detect and Localize False Data Injection Attacks | False Data Injection Attack (FDIA) has become a growing concern in modern cyber-physical power systems. Most existing FDIA detection techniques project the raw measurement data into a high-dimensional latent space to separate normal and attacked samples. These approaches focus more on the statistical correlations of data values and are therefore susceptible to data distribution drifts induced by changes in system operating points or changes in FDIA types and strengths, especially for FDIA localization tasks. Causal inference, on the other hand, extracts the causality behind the coordinated fluctuations of different measurements. The causality patterns are determined by fundamental physical laws such as Ohm's Law and Kirchhoff's Law. They are sensitive to the violation of physical laws caused by FDIA, but tend to remain stable with the drift of system operating points. Leveraging this advantage, this paper proposes a joint FDIA detection and localization framework based on causal inference and the Graph Attention Network (GAT) to identify the attacked system nodes. The proposed framework consists of two levels. The lower level uses the X-learner algorithm to estimate the causality strength between measurements and generate Measurement Causality Graphs (MCGs). The upper level then applies a GAT to identify the anomaly patterns in the MCGs. Since the extracted causality patterns are intrinsically related to the measurements, it is easier for the upper level to figure out the attacked nodes than the existing FDIA localization approaches. The performance of the proposed framework is evaluated on the IEEE 39-bus system. Experimental results show that the causality-based FDIA detection and localization mechanism is highly interpretable and robust. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 400,336
2302.07455 | A lightweight network for photovoltaic cell defect detection in electroluminescence images based on neural architecture search and knowledge distillation | Nowadays, the rapid development of photovoltaic (PV) power stations requires increasingly reliable maintenance and fault diagnosis of PV modules in the field. Due to the effectiveness, convolutional neural network (CNN) has been widely used in the existing automatic defect detection of PV cells. However, the parameters of these CNN-based models are very large, which require stringent hardware resources and it is difficult to be applied in actual industrial projects. To solve these problems, we propose a novel lightweight high-performance model for automatic defect detection of PV cells in electroluminescence (EL) images based on neural architecture search and knowledge distillation. To auto-design an effective lightweight model, we introduce neural architecture search to the field of PV cell defect classification for the first time. Since the defect can be any size, we design a proper search structure of network to better exploit the multi-scale characteristic. To improve the overall performance of the searched lightweight model, we further transfer the knowledge learned by the existing pre-trained large-scale model based on knowledge distillation. Different kinds of knowledge are exploited and transferred, including attention information, feature information, logit information and task-oriented information. Experiments have demonstrated that the proposed model achieves the state-of-the-art performance on the public PV cell dataset of EL images under online data augmentation with accuracy of 91.74% and the parameters of 1.85M. The proposed lightweight high-performance model can be easily deployed to the end devices of the actual industrial projects and retain the accuracy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 345,738
1106.1684 | Max-Margin Stacking and Sparse Regularization for Linear Classifier Combination and Selection | The main principle of stacked generalization (or Stacking) is using a second-level generalizer to combine the outputs of base classifiers in an ensemble. In this paper, we investigate different combination types under the stacking framework; namely weighted sum (WS), class-dependent weighted sum (CWS) and linear stacked generalization (LSG). For learning the weights, we propose using regularized empirical risk minimization with the hinge loss. In addition, we propose using group sparsity for regularization to facilitate classifier selection. We performed experiments using two different ensemble setups with differing diversities on 8 real-world datasets. Results show the power of regularized learning with the hinge loss function. Using sparse regularization, we are able to reduce the number of selected classifiers of the diverse ensemble without sacrificing accuracy. With the non-diverse ensembles, we even gain accuracy on average by using sparse regularization. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 10,773
2009.05403 | Semantic Segmentation of Histopathological Slides for the Classification of Cutaneous Lymphoma and Eczema | Mycosis fungoides (MF) is a rare, potentially life threatening skin disease, which in early stages clinically and histologically strongly resembles Eczema, a very common and benign skin condition. In order to increase the survival rate, one needs to provide the appropriate treatment early on. To this end, one crucial step for specialists is the evaluation of histopathological slides (glass slides), or Whole Slide Images (WSI), of the patients' skin tissue. We introduce a deep learning aided diagnostics tool that brings a two-fold value to the decision process of pathologists. First, our algorithm accurately segments WSI into regions that are relevant for an accurate diagnosis, achieving a Mean-IoU of 69% and a Matthews Correlation score of 83% on a novel dataset. Additionally, we also show that our model is competitive with the state of the art on a reference dataset. Second, using the segmentation map and the original image, we are able to predict if a patient has MF or Eczema. We created two models that can be applied in different stages of the diagnostic pipeline, potentially eliminating life-threatening mistakes. The classification outcome is considerably more interpretable than using only the WSI as the input, since it is also based on the segmentation map. Our segmentation model, which we call EU-Net, extends a classical U-Net with an EfficientNet-B7 encoder which was pre-trained on the Imagenet dataset. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 195,312
2012.07381 | Influence-Driven Data Poisoning in Graph-Based Semi-Supervised Classifiers | Graph-based Semi-Supervised Learning (GSSL) is a practical solution to learn from a limited amount of labelled data together with a vast amount of unlabelled data. However, due to their reliance on the known labels to infer the unknown labels, these algorithms are sensitive to data quality. It is therefore essential to study the potential threats related to the labelled data, more specifically, label poisoning. In this paper, we propose a novel data poisoning method which efficiently approximates the result of label inference to identify the inputs which, if poisoned, would produce the highest number of incorrectly inferred labels. We extensively evaluate our approach on three classification problems under 24 different experimental settings each. Compared to the state of the art, our influence-driven attack increases the error rate by 50% more on average, while being faster by multiple orders of magnitude. Moreover, our method can inform engineers of inputs that deserve investigation (relabelling them) before training the learning model. We show that relabelling one-third of the poisoned inputs (selected based on their influence) reduces the poisoning effect by 50%. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 211,436
2409.13448 | Co-Optimization of Tool Orientations, Kinematic Redundancy, and Waypoint Timing for Robot-Assisted Manufacturing | In this paper, we present a concurrent and scalable trajectory optimization method to improve the quality of robot-assisted manufacturing. Our method simultaneously optimizes tool orientations, kinematic redundancy, and waypoint timing on input toolpaths with large numbers of waypoints to improve kinematic smoothness while incorporating manufacturing constraints. In contrast, existing methods determine them in a decoupled manner. To deal with the large number of waypoints on a toolpath, we propose a decomposition-based numerical scheme to optimize the trajectory in an out-of-core manner, which can also run in parallel to improve the efficiency. Simulations and physical experiments have been conducted to demonstrate the performance of our method in examples of robot-assisted additive manufacturing. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 489,997
2111.14825 | Latent Transformations via NeuralODEs for GAN-based Image Editing | Recent advances in high-fidelity semantic image editing heavily rely on the presumably disentangled latent spaces of the state-of-the-art generative models, such as StyleGAN. Specifically, recent works show that it is possible to achieve decent controllability of attributes in face images via linear shifts along latent directions. Several recent methods address the discovery of such directions, implicitly assuming that the state-of-the-art GANs learn the latent spaces with inherently linearly separable attribute distributions and semantic vector arithmetic properties. In our work, we show that nonlinear latent code manipulations realized as flows of a trainable Neural ODE are beneficial for many practical non-face image domains with more complex non-textured factors of variation. In particular, we investigate a large number of datasets with known attributes and demonstrate that certain attribute manipulations are challenging to obtain with linear shifts only. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 268,721
2307.07344 | Inverse Evolution Layers: Physics-informed Regularizers for Deep Neural Networks | Traditional image processing methods employing partial differential equations (PDEs) offer a multitude of meaningful regularizers, along with valuable theoretical foundations for a wide range of image-related tasks. This makes their integration into neural networks a promising avenue. In this paper, we introduce a novel regularization approach inspired by the reverse process of PDE-based evolution models. Specifically, we propose inverse evolution layers (IELs), which serve as bad property amplifiers to penalize neural networks whose outputs have undesired characteristics. Using IELs, one can achieve specific regularization objectives and endow neural networks' outputs with corresponding properties of the PDE models. Our experiments, focusing on semantic segmentation tasks using heat-diffusion IELs, demonstrate their effectiveness in mitigating noisy label effects. Additionally, we develop curve-motion IELs to enforce convex shape regularization in neural network-based segmentation models for preventing the generation of concave outputs. Theoretical analysis confirms the efficacy of IELs as an effective regularization mechanism, particularly in handling training with label issues. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 379,371
2303.05445 | Flooding with Absorption: An Efficient Protocol for Heterogeneous Bandits over Complex Networks | Multi-armed bandits are extensively used to model sequential decision-making, making them ubiquitous in many real-life applications such as online recommender systems and wireless networking. We consider a multi-agent setting where each agent solves their own bandit instance endowed with a different set of arms. Their goal is to minimize their group regret while collaborating via some communication protocol over a given network. Previous literature on this problem only considered arm heterogeneity and networked agents separately. In this work, we introduce a setting that encompasses both features. For this novel setting, we first provide a rigorous regret analysis for a standard flooding protocol combined with the classic UCB policy. Then, to mitigate the issue of high communication costs incurred by flooding in complex networks, we propose a new protocol called Flooding with Absorption (FwA). We provide a theoretical analysis of the resulting regret bound and discuss the advantages of using FwA over flooding. Lastly, we experimentally verify on various scenarios, including dynamic networks, that FwA leads to significantly lower communication costs despite minimal regret performance loss compared to other network protocols. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 350,465
2008.11303 | Integrated Cutting and Packing Heterogeneous Precast Beams Multiperiod Production Planning Problem | We introduce a novel variant of cutting production planning problems named Integrated Cutting and Packing Heterogeneous Precast Beams Multiperiod Production Planning (ICP-HPBMPP). We propose an integer linear programming model for the ICP-HPBMPP, as well as a lower bound for its optimal objective function value, which is empirically shown to be closer to the optimal solution value than the bound obtained from the linear relaxation of the model. We also propose a genetic algorithm approach for the ICP-HPBMPP as an alternative solution method. We discuss computational experiments and propose a parameterization for the genetic algorithm using D-optimal experimental design. We observe good performance of the exact approach when solving small-sized instances, although there are difficulties in finding optimal solutions for medium and large-sized problems, or even in finding feasible solutions for large instances. On the other hand, the genetic algorithm could find good-quality solutions for large-sized instances within short computing times. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 193,233
2010.07810 | Does Data Augmentation Benefit from Split BatchNorms | Data augmentation has emerged as a powerful technique for improving the performance of deep neural networks and led to state-of-the-art results in computer vision. However, state-of-the-art data augmentation strongly distorts training images, leading to a disparity between examples seen during training and inference. In this work, we explore a recently proposed training paradigm in order to correct for this disparity: using an auxiliary BatchNorm for the potentially out-of-distribution, strongly augmented images. Our experiments then focus on how to define the BatchNorm parameters that are used at evaluation. To eliminate the train-test disparity, we experiment with using the batch statistics defined by clean training images only, yet surprisingly find that this does not yield improvements in model performance. Instead, we investigate using BatchNorm parameters defined by weak augmentations and find that this method significantly improves the performance of common image classification benchmarks such as CIFAR-10, CIFAR-100, and ImageNet. We then explore a fundamental trade-off between accuracy and robustness coming from using different BatchNorm parameters, providing greater insight into the benefits of data augmentation on model performance. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 200,952 |
1909.07750 | MDP Playground: An Analysis and Debug Testbed for Reinforcement Learning | We present MDP Playground, a testbed for Reinforcement Learning (RL) agents with dimensions of hardness that can be controlled independently to challenge agents in different ways and obtain varying degrees of hardness in toy and complex RL environments. We consider and allow control over a wide variety of dimensions, including delayed rewards, sequence lengths, reward density, stochasticity, image representations, irrelevant features, time unit, action range and more. We define a parameterised collection of fast-to-run toy environments in OpenAI Gym by varying these dimensions and propose to use these to understand agents better. We then show how to design experiments using MDP Playground to gain insights on the toy environments. We also provide wrappers that can inject many of these dimensions into any Gym environment. We experiment with these wrappers on Atari and Mujoco to allow for understanding the effects of these dimensions on environments that are more complex than the toy environments. We also compare the effect of the dimensions on the toy and complex environments. Finally, we show how to use MDP Playground to debug agents, to study the interaction of multiple dimensions and describe further use-cases. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 145,767 |