| id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2307.05884 | Learning Koopman Operators with Control Using Bi-level Optimization | The accurate modeling and control of nonlinear dynamical effects are crucial for numerous robotic systems. The Koopman formalism emerges as a valuable tool for linear control design in nonlinear systems within unknown environments. However, learning the Koopman operator with control from data remains challenging, in particular the simultaneous identification of the Koopman linear dynamics and the mapping between the physical and Koopman states. Conventionally, the simultaneous learning of the dynamics and mapping is achieved via single-level optimization based on one-step or multi-step discrete-time predictions, but the learned model may lack model robustness, training efficiency, and/or long-term predictive accuracy. This paper presents a bi-level optimization framework that jointly learns the Koopman embedding mapping and Koopman dynamics with exact long-term dynamical constraints. Our formulation allows back-propagation in standard learning frameworks and the use of state-of-the-art optimizers, yielding more accurate and stable system prediction over long time horizons across various applications compared to conventional methods. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 378,884 |
2408.06610 | CROME: Cross-Modal Adapters for Efficient Multimodal LLM | Multimodal Large Language Models (MLLMs) demonstrate remarkable image-language capabilities, but their widespread use faces challenges in cost-effective training and adaptation. Existing approaches often necessitate expensive language-model retraining and offer limited adaptability. Additionally, the current focus on zero-shot performance improvements offers insufficient guidance for task-specific tuning. We propose CROME, an efficient vision-language instruction tuning framework. It features a novel gated cross-modal adapter that effectively combines visual and textual representations prior to input into a frozen LLM. This lightweight adapter, trained with minimal parameters, enables efficient cross-modal understanding. Notably, CROME demonstrates superior zero-shot performance on standard visual question answering and instruction-following benchmarks. Moreover, it enables fine-tuning with exceptional parameter efficiency, competing with state-of-the-art task-specific specialist methods. CROME demonstrates the potential of pre-LM alignment for building scalable, adaptable, and parameter-efficient multimodal models. | false | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | 480,263 |
2007.09762 | A Theory of Multiple-Source Adaptation with Limited Target Labeled Data | We present a theoretical and algorithmic study of the multiple-source domain adaptation problem in the common scenario where the learner has access only to a limited amount of labeled target data, but has at its disposal a large amount of labeled data from multiple source domains. We show that a new family of algorithms based on model selection ideas benefits from very favorable guarantees in this scenario and discuss some theoretical obstacles affecting some alternative techniques. We also report the results of several experiments with our algorithms that demonstrate their practical effectiveness. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 188,059 |
1909.09143 | Leveraging User Engagement Signals For Entity Labeling in a Virtual Assistant | Personal assistant AI systems such as Siri, Cortana, and Alexa have become widely used as a means to accomplish tasks through natural language commands. However, components in these systems generally rely on supervised machine learning algorithms that require large amounts of hand-annotated training data, which is expensive and time consuming to collect. The ability to incorporate unsupervised, weakly supervised, or distantly supervised data holds significant promise in overcoming this bottleneck. In this paper, we describe a framework that leverages user engagement signals (user behaviors that demonstrate a positive or negative response to content) to automatically create granular entity labels for training data augmentation. Strategies such as multi-task learning and validation using an external knowledge base are employed to incorporate the engagement annotated data and to boost the model's accuracy on a sequence labeling task. Our results show that learning from data automatically labeled by user engagement signals achieves significant accuracy gains in a production deep learning system, when measured on both the sequence labeling task as well as on user facing results produced by the system end-to-end. We believe this is the first use of user engagement signals to help generate training data for a sequence labeling task on a large scale, and can be applied in practical settings to speed up new feature deployment when little human annotated data is available. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 146,162 |
1604.01476 | Zadoff-Chu sequence design for random access initial uplink synchronization | The autocorrelation of a Zadoff-Chu (ZC) sequence with a non-zero cyclically shifted version of itself is zero. Due to this property, ZC sequences are widely used in the LTE air interface in the primary synchronization signal (PSS), random access preamble (PRACH), uplink control channel (PUCCH), etc. However, this property of ZC sequences is not useful in the random access initial uplink synchronization problem due to some specific structures of the underlying problem. In particular, state-of-the-art uplink synchronization algorithms do not perform equally well for all ZC sequences. In this work, we show a systematic procedure to choose the ZC sequences that yield the optimum performance of the uplink synchronization algorithms. At first, we show that the uplink synchronization is a sparse signal recovery problem on an overcomplete basis. Next, we use the theory of sparse recovery algorithms and identify a factor that controls the performance of the algorithms. We then suggest a ZC sequence design procedure to optimally choose this factor. The simulation results show that the performance of most state-of-the-art uplink synchronization algorithms improves significantly when the ZC sequences are chosen using the proposed technique. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 54,203 |
2407.20413 | Through the Looking Glass, and what Horn Clause Programs Found There | Dual Horn clauses mirror key properties of Horn clauses. This paper explores the ``other side of the looking glass'' to reveal some expected and unexpected symmetries and their practical uses. We revisit Dual Horn clauses as enablers of a form of constructive negation that supports goal-driven forward reasoning and is valid both intuitionistically and classically. In particular, we explore the ability to falsify a counterfactual hypothesis in the context of a background theory expressed as a Dual Horn clause program. With Dual Horn clause programs, by contrast to negation as failure, the variable bindings in their computed answers provide explanations for the reasons why a statement is successfully falsified. Moreover, in the propositional case, by contrast to negation as failure as implemented with stable models semantics in ASP systems, and similarly to Horn clause programs, Dual Horn clause programs have polynomial complexity. After specifying their execution model with a metainterpreter, we devise a compilation scheme from Dual Horn clause programs to Horn clause programs, ensuring their execution with no performance penalty and we design the embedded SymLP language to support combined Horn clause and Dual Horn clause programs. As a (motivating) application, we cast LLM reasoning chains into propositional Horn and Dual Horn clauses that work together to constructively prove and disprove goals and enhance Generative AI with explainability of reasoning chains. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 477,154 |
2405.02525 | RLStop: A Reinforcement Learning Stopping Method for TAR | We present RLStop, a novel Technology Assisted Review (TAR) stopping rule based on reinforcement learning that helps minimise the number of documents that need to be manually reviewed within TAR applications. RLStop is trained on example rankings using a reward function to identify the optimal point to stop examining documents. Experiments at a range of target recall levels on multiple benchmark datasets (CLEF e-Health, TREC Total Recall, and Reuters RCV1) demonstrated that RLStop substantially reduces the workload required to screen a document collection for relevance. RLStop outperforms a wide range of alternative approaches, achieving performance close to the maximum possible for the task under some circumstances. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 451,797 |
2405.14226 | Variational Delayed Policy Optimization | In environments with delayed observation, state augmentation by including actions within the delay window is adopted to retrieve Markovian property to enable reinforcement learning (RL). However, state-of-the-art (SOTA) RL techniques with Temporal-Difference (TD) learning frameworks often suffer from learning inefficiency, due to the significant expansion of the augmented state space with the delay. To improve learning efficiency without sacrificing performance, this work introduces a novel framework called Variational Delayed Policy Optimization (VDPO), which reformulates delayed RL as a variational inference problem. This problem is further modelled as a two-step iterative optimization problem, where the first step is TD learning in the delay-free environment with a small state space, and the second step is behaviour cloning which can be addressed much more efficiently than TD learning. We not only provide a theoretical analysis of VDPO in terms of sample complexity and performance, but also empirically demonstrate that VDPO can achieve consistent performance with SOTA methods, with a significant enhancement of sample efficiency (approximately 50\% fewer samples) in the MuJoCo benchmark. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 456,319 |
2102.00573 | A Secure Learning Control Strategy via Dynamic Camouflaging for Unknown Dynamical Systems under Attacks | This paper presents a secure reinforcement learning (RL) based control method for unknown linear time-invariant cyber-physical systems (CPSs) that are subjected to compositional attacks such as eavesdropping and covert attack. We consider the attack scenario where the attacker learns about the dynamic model during the exploration phase of the learning conducted by the designer to learn a linear quadratic regulator (LQR), and thereafter uses this information to conduct a covert attack on the dynamic system, which we refer to as the doubly learning-based control and attack (DLCA) framework. We propose a dynamic camouflaging based attack-resilient reinforcement learning (ARRL) algorithm which can learn the desired optimal controller for the dynamic system and, at the same time, inject sufficient misinformation into the attacker's estimate of the system dynamics. The algorithm is accompanied by theoretical guarantees and extensive numerical experiments on a consensus multi-agent system and on a benchmark power grid model. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 217,834 |
1809.07744 | Guaranteed Globally Optimal Planar Pose Graph and Landmark SLAM via Sparse-Bounded Sums-of-Squares Programming | Autonomous navigation requires an accurate model or map of the environment. While dramatic progress over the past two decades has enabled large-scale SLAM, the majority of existing methods rely on non-linear optimization techniques to find the MLE of the robot trajectory and surrounding environment. These methods are prone to local minima and are thus sensitive to initialization. Several recent papers have developed optimization algorithms for the Pose-Graph SLAM problem that can certify the optimality of a computed solution. Though this does not guarantee a priori that this approach generates an optimal solution, a recent extension has shown that when the noise lies within a critical threshold, the solution to the optimization algorithm is guaranteed to be optimal. To address the limitations of existing approaches, this paper illustrates that the Pose-Graph SLAM and Landmark SLAM problems can be formulated as polynomial optimization programs that are SOS convex. This paper then describes how the Pose-Graph and Landmark SLAM problems can be solved to a global minimum without initialization regardless of noise level using the Sparse-BSOS hierarchy. This paper also empirically illustrates that convergence happens at the second step in this hierarchy. In addition, this paper illustrates how this Sparse-BSOS hierarchy can be implemented in the complex domain and empirically shows that convergence happens also at the second step of this complex domain hierarchy. Finally, the superior performance of the proposed approach when compared to existing SLAM methods is illustrated on graphs with several hundred nodes. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 108,345 |
1801.03331 | Reasoning about Unforeseen Possibilities During Policy Learning | Methods for learning optimal policies in autonomous agents often assume that the way the domain is conceptualised---its possible states and actions and their causal structure---is known in advance and does not change during learning. This is an unrealistic assumption in many scenarios, because new evidence can reveal important information about what is possible, possibilities that the agent was not aware existed prior to learning. We present a model of an agent which both discovers and learns to exploit unforeseen possibilities using two sources of evidence: direct interaction with the world and communication with a domain expert. We use a combination of probabilistic and symbolic reasoning to estimate all components of the decision problem, including its set of random variables and their causal dependencies. Agent simulations show that the agent converges on optimal policies even when it starts out unaware of factors that are critical to behaving optimally. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 88,074 |
2102.02640 | Low Bit-Rate Wideband Speech Coding: A Deep Generative Model based Approach | Traditional low bit-rate speech coding approaches only handle narrowband speech at 8 kHz, which limits further improvements in speech quality. Motivated by the recent successful exploration of deep learning methods for image and speech compression, this paper presents a new approach through vector quantization (VQ) of mel-frequency cepstral coefficients (MFCCs), using a deep generative model called WaveGlow to provide efficient and high-quality speech coding. The coding feature is solely an 80-dimension MFCC vector for 16 kHz wideband speech; speech coding at bit-rates of 1000-2000 bit/s can then be scalably implemented by applying different VQ schemes to the MFCC vector. This new deep generative network based codec works fast, as the WaveGlow model abandons the sample-by-sample autoregressive mechanism. We evaluated this new approach on the multi-speaker TIMIT corpus, and experimental results demonstrate that it provides better speech quality than the state-of-the-art classic MELPe codec at a lower bit-rate. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 218,474 |
2408.10575 | MUSE: Mamba is Efficient Multi-scale Learner for Text-video Retrieval | Text-Video Retrieval (TVR) aims to align and associate relevant video content with corresponding natural language queries. Most existing TVR methods are based on large-scale pre-trained vision-language models (e.g., CLIP). However, due to the inherent plain structure of CLIP, few TVR methods explore the multi-scale representations which offer richer contextual information for a more thorough understanding. To this end, we propose MUSE, a multi-scale mamba with linear computational complexity for efficient cross-resolution modeling. Specifically, the multi-scale representations are generated by applying a feature pyramid on the last single-scale feature map. Then, we employ the Mamba structure as an efficient multi-scale learner to jointly learn scale-wise representations. Furthermore, we conduct comprehensive studies to investigate different model structures and designs. Extensive results on three popular benchmarks have validated the superiority of MUSE. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 481,920 |
1710.02186 | Collaborative Platooning of Automated Vehicles Using Variable Time-Gaps | Connected automated vehicles (CAVs) could potentially be coordinated to safely attain the maximum traffic flow on roadways under dynamic traffic patterns, such as those engendered by the merger of two strings of vehicles due to a lane drop. Strings of vehicles have to be shaped correctly in terms of the inter-vehicular time-gap and velocity to ensure that such operation is feasible. However, controllers that can achieve such traffic shaping over the multiple dimensions of target time-gap and velocity over a region of space are unknown. The objective of this work is to design such a controller, and to show that we can design candidate time-gap and velocity profiles such that it can stabilize the string of vehicles in attaining the target profiles. Our analysis is based on studying the system in the spatial rather than the time domain, which enables us to study stability in terms of minimizing errors from the target profile and across vehicles as a function of location. Finally, we conduct numerical simulations in the context of shaping two platoons for merger, which we use to illustrate how to select time-gap and velocity profiles for maximizing flow and maintaining safety. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 82,124 |
2304.02497 | Hyper-parameter Tuning for Adversarially Robust Models | This work focuses on the problem of hyper-parameter tuning (HPT) for robust (i.e., adversarially trained) models, shedding light on the new challenges and opportunities arising during the HPT process for robust models. To this end, we conduct an extensive experimental study based on 3 popular deep models, in which we explore exhaustively 9 (discretized) HPs, 2 fidelity dimensions, and 2 attack bounds, for a total of 19208 configurations (corresponding to 50 thousand GPU hours). Through this study, we show that the complexity of the HPT problem is further exacerbated in adversarial settings due to the need to independently tune the HPs used during standard and adversarial training: succeeding in doing so (i.e., adopting different HP settings in both phases) can lead to a reduction of up to 80% and 43% of the error for clean and adversarial inputs, respectively. On the other hand, we also identify new opportunities to reduce the cost of HPT for robust models. Specifically, we propose to leverage cheap adversarial training methods to obtain inexpensive, yet highly correlated, estimations of the quality achievable using state-of-the-art methods. We show that, by exploiting this novel idea in conjunction with a recent multi-fidelity optimizer (taKG), the efficiency of the HPT process can be enhanced by up to 2.1x. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 356,455 |
2405.03699 | HCC Is All You Need: Alignment-The Sensible Kind Anyway-Is Just Human-Centered Computing | This article argues that AI Alignment is a type of Human-Centered Computing. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 452,282 |
2310.12942 | On the Representational Capacity of Recurrent Neural Language Models | This work investigates the computational expressivity of language models (LMs) based on recurrent neural networks (RNNs). Siegelmann and Sontag (1992) famously showed that RNNs with rational weights and hidden states and unbounded computation time are Turing complete. However, LMs define weightings over strings in addition to just (unweighted) language membership and the analysis of the computational power of RNN LMs (RLMs) should reflect this. We extend the Turing completeness result to the probabilistic case, showing how a rationally weighted RLM with unbounded computation time can simulate any deterministic probabilistic Turing machine (PTM) with rationally weighted transitions. Since, in practice, RLMs work in real-time, processing a symbol at every time step, we treat the above result as an upper bound on the expressivity of RLMs. We also provide a lower bound by showing that under the restriction to real-time computation, such models can simulate deterministic real-time rational PTMs. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 401,210 |
1910.03620 | Receding Horizon Curiosity | Sample-efficient exploration is crucial not only for discovering rewarding experiences but also for adapting to environment changes in a task-agnostic fashion. A principled treatment of the problem of optimal input synthesis for system identification is provided within the framework of sequential Bayesian experimental design. In this paper, we present an effective trajectory-optimization-based approximate solution of this otherwise intractable problem that models optimal exploration in an unknown Markov decision process (MDP). By interleaving episodic exploration with Bayesian nonlinear system identification, our algorithm takes advantage of the inductive bias to explore in a directed manner, without assuming prior knowledge of the MDP. Empirical evaluations indicate a clear advantage of the proposed algorithm in terms of the rate of convergence and the final model fidelity when compared to intrinsic-motivation-based algorithms employing exploration bonuses such as prediction error and information gain. Moreover, our method maintains a computational advantage over a recent model-based active exploration (MAX) algorithm, by focusing on the information gain along trajectories instead of seeking a global exploration policy. A reference implementation of our algorithm and the conducted experiments is publicly available. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 148,537 |
2407.19156 | Robust Multimodal 3D Object Detection via Modality-Agnostic Decoding and Proximity-based Modality Ensemble | Recent advancements in 3D object detection have benefited from multi-modal information from the multi-view cameras and LiDAR sensors. However, the inherent disparities between the modalities pose substantial challenges. We observe that existing multi-modal 3D object detection methods heavily rely on the LiDAR sensor, treating the camera as an auxiliary modality for augmenting semantic details. This often leads to not only underutilization of camera data but also significant performance degradation in scenarios where LiDAR data is unavailable. Additionally, existing fusion methods overlook the detrimental impact of sensor noise induced by environmental changes, on detection performance. In this paper, we propose MEFormer to address the LiDAR over-reliance problem by harnessing critical information for 3D object detection from every available modality while concurrently safeguarding against corrupted signals during the fusion process. Specifically, we introduce Modality Agnostic Decoding (MOAD) that extracts geometric and semantic features with a shared transformer decoder regardless of input modalities and provides promising improvement with a single modality as well as multi-modality. Additionally, our Proximity-based Modality Ensemble (PME) module adaptively utilizes the strengths of each modality depending on the environment while mitigating the effects of a noisy sensor. Our MEFormer achieves state-of-the-art performance of 73.9% NDS and 71.5% mAP in the nuScenes validation set. Extensive analyses validate that our MEFormer improves robustness against challenging conditions such as sensor malfunctions or environmental changes. The source code is available at https://github.com/hanchaa/MEFormer | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 476,665 |
2009.02023 | Chain-Net: Learning Deep Model for Modulation Classification Under Synthetic Channel Impairment | Modulation classification, an intermediate process between signal detection and demodulation in a physical layer, is now attracting more interest in the cognitive radio field, wherein performance is powered by artificial intelligence algorithms. However, most existing conventional approaches struggle to effectively learn weakly discriminative modulation patterns. This paper proposes a robust modulation classification method by taking advantage of deep learning to capture the meaningful information of modulation signals at multi-scale feature representations. To this end, a novel convolutional neural network architecture, namely Chain-Net, is developed with various asymmetric kernels organized in two processing flows and associated via depth-wise concatenation and element-wise addition for optimizing feature utilization. The network is evaluated on a large dataset of 14 challenging modulation formats, including analog and high-order digital techniques. The simulation results demonstrate that Chain-Net robustly classifies the modulation of radio signals suffering from a synthetic channel deterioration and further performs better than other deep networks. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 194,444 |
2001.06804 | Learning Compositional Neural Information Fusion for Human Parsing | This work proposes to combine neural networks with the compositional hierarchy of human bodies for efficient and complete human parsing. We formulate the approach as a neural information fusion framework. Our model assembles the information from three inference processes over the hierarchy: direct inference (directly predicting each part of a human body using image information), bottom-up inference (assembling knowledge from constituent parts), and top-down inference (leveraging context from parent nodes). The bottom-up and top-down inferences explicitly model the compositional and decompositional relations in human bodies, respectively. In addition, the fusion of multi-source information is conditioned on the inputs, i.e., by estimating and considering the confidence of the sources. The whole model is end-to-end differentiable, explicitly modeling information flows and structures. Our approach is extensively evaluated on four popular datasets, outperforming the state of the art in all cases, with a fast processing speed of 23 fps. Our code and results have been released to help ease future research in this direction. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 160,887 |
2302.07667 | CERiL: Continuous Event-based Reinforcement Learning | This paper explores the potential of event cameras to enable continuous time reinforcement learning. We formalise this problem where a continuous stream of unsynchronised observations is used to produce a corresponding stream of output actions for the environment. This lack of synchronisation enables greatly enhanced reactivity. We present a method to train on event streams derived from standard RL environments, thereby solving the proposed continuous time RL problem. The CERiL algorithm uses specialised network layers which operate directly on an event stream, rather than aggregating events into quantised image frames. We show the advantages of event streams over less-frequent RGB images. The proposed system outperforms networks typically used in RL, even succeeding at tasks which cannot be solved traditionally. We also demonstrate the value of our CERiL approach over a standard SNN baseline using event streams. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 345,796 |
2407.11553 | Learning Global and Local Features of Power Load Series Through Transformer and 2D-CNN: An Image-based Multi-step Forecasting Approach Incorporating Phase Space Reconstruction | As modern power systems continue to evolve, accurate power load forecasting remains a critical issue in energy management. The phase space reconstruction (PSR) method can effectively retain the inner chaotic property of power load from a system dynamics perspective and thus is a promising knowledge-based preprocessing method for short-term forecasting. In order to fully utilize the capability of the PSR method to model the non-stationary characteristics within power load, and to address the difficulty of applying traditional PSR prediction methods to form a general multi-step forecasting scheme, this study proposes a novel multi-step forecasting approach by delicately integrating the PSR with neural networks to establish an end-to-end learning system. Firstly, the useful features in the phase trajectory are discussed in detail. Through mathematical derivation, the equivalent characterization of the PSR and another time series preprocessing method, patch segmentation, is demonstrated for the first time. Based on this knowledge, an image-based modeling perspective is introduced. Subsequently, a novel deep learning model, namely PSR-GALIEN, is designed, in which the Transformer Encoder and 2D-CNN are employed for the extraction of the global and local patterns in the image, and an MLP-based predictor is used for the efficient correlation modeling. Then, extensive experiments are conducted on five real-world benchmark datasets to verify the effectiveness of PSR-GALIEN. The results show that, compared with six state-of-the-art deep learning models, the forecasting performance of PSR-GALIEN consistently surpasses these baselines, achieving superior accuracy in both intra-day and day-ahead forecasting scenarios. At the same time, the attributions of its forecasting results can be explained through the visualization-based method, which significantly increases the interpretability. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 473,514 |
2305.14992 | Reasoning with Language Model is Planning with World Model | Large language models (LLMs) have shown remarkable reasoning capabilities, especially when prompted to generate intermediate reasoning steps (e.g., Chain-of-Thought, CoT). However, LLMs can still struggle with problems that are easy for humans, such as generating action plans for executing tasks in a given environment, or performing complex math, logical, and commonsense reasoning. The deficiency stems from the key fact that LLMs lack an internal $\textit{world model}$ to predict the world $\textit{state}$ (e.g., environment status, intermediate variable values) and simulate long-term outcomes of actions. This prevents LLMs from performing deliberate planning akin to human brains, which involves exploring alternative reasoning paths, anticipating future states and rewards, and iteratively refining existing reasoning steps. To overcome the limitations, we propose a new LLM reasoning framework, $\underline{R}$easoning vi$\underline{a}$ $\underline{P}$lanning $\textbf{(RAP)}$. RAP repurposes the LLM as both a world model and a reasoning agent, and incorporates a principled planning algorithm (based on Monte Carlo Tree Search) for strategic exploration in the vast reasoning space. During reasoning, the LLM (as agent) incrementally builds a reasoning tree under the guidance of the LLM (as world model) and task-specific rewards, and obtains a high-reward reasoning path efficiently with a proper balance between exploration $\textit{vs.}$ exploitation. We apply RAP to a variety of challenging reasoning problems including plan generation, math reasoning, and logical inference. Empirical results on these tasks demonstrate the superiority of RAP over various strong baselines, including CoT and least-to-most prompting with self-consistency. RAP on LLAMA-33B surpasses CoT on GPT-4 with 33% relative improvement in a plan generation setting. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 367,378
2112.13833 | HOPE: A Task-Oriented and Human-Centric Evaluation Framework Using Professional Post-Editing Towards More Effective MT Evaluation | Traditional automatic evaluation metrics for machine translation have been widely criticized by linguists due to their low accuracy, lack of transparency, focus on language mechanics rather than semantics, and low agreement with human quality evaluation. Human evaluations in the form of MQM-like scorecards have always been carried out in real industry settings by both clients and translation service providers (TSPs). However, traditional human translation quality evaluations are costly to perform and go into great linguistic detail, raise issues as to inter-rater reliability (IRR), and are not designed to measure quality of worse than premium quality translations. In this work, we introduce HOPE, a task-oriented and human-centric evaluation framework for machine translation output based on professional post-editing annotations. It contains only a limited number of commonly occurring error types, and uses a scoring model with a geometric progression of error penalty points (EPPs), reflecting error severity, applied to each translation unit. The initial experimental work carried out on English-Russian language pair MT outputs on marketing content type of text from a highly technical domain reveals that our evaluation framework is quite effective in reflecting the MT output quality regarding both overall system-level performance and segment-level transparency, and it increases the IRR for error type interpretation. The approach has several key advantages, such as the ability to measure and compare less than perfect MT output from different systems, the ability to indicate human perception of quality, immediate estimation of the labor effort required to bring MT output to premium quality, low-cost and faster application, as well as higher IRR. Our experimental data is available at \url{https://github.com/lHan87/HOPE}. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 273,370
2112.00890 | Counterfactual Explanations via Latent Space Projection and Interpolation | Counterfactual explanations represent the minimal change to a data sample that alters its predicted classification, typically from an unfavorable initial class to a desired target class. Counterfactuals help answer questions such as "what needs to change for this application to get accepted for a loan?". A number of recently proposed approaches to counterfactual generation give varying definitions of "plausible" counterfactuals and methods to generate them. However, many of these methods are computationally intensive and provide unconvincing explanations. Here we introduce SharpShooter, a method for binary classification that starts by creating a projected version of the input that classifies as the target class. Counterfactual candidates are then generated in latent space on the interpolation line between the input and its projection. We then demonstrate that our framework translates core characteristics of a sample to its counterfactual through the use of learned representations. Furthermore, we show that SharpShooter is competitive across common quality metrics on tabular and image datasets while being orders of magnitude faster than two comparable methods and excels at measures of realism, making it well-suited for high velocity machine learning applications which require timely explanations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 269,283
2410.11298 | Sorted Weight Sectioning for Energy-Efficient Unstructured Sparse DNNs on Compute-in-Memory Crossbars | We introduce $\textit{sorted weight sectioning}$ (SWS): a weight allocation algorithm that places sorted deep neural network (DNN) weight sections on bit-sliced compute-in-memory (CIM) crossbars to reduce analog-to-digital converter (ADC) energy consumption. Data conversions are the most energy-intensive process in crossbar operation. SWS effectively reduces this cost leveraging (1) small weights and (2) zero weights (weight sparsity). DNN weights follow bell-shaped distributions, with most weights near zero. Using SWS, we only need low-order crossbar columns for sections with low-magnitude weights. This reduces the quantity and resolution of ADCs used, exponentially decreasing ADC energy costs without significantly degrading DNN accuracy. Unstructured sparsification further sharpens the weight distribution with small accuracy loss. However, it presents challenges in hardware tracking of zeros: we cannot switch zero rows to other layer weights in unsorted crossbars without index matching. SWS efficiently addresses unstructured sparse models using offline remapping of zeros into earlier sections, which reveals full sparsity potential and maximizes energy efficiency. Our method reduces ADC energy use by 89.5% on unstructured sparse BERT models. Overall, this paper introduces a novel algorithm to promote energy-efficient CIM crossbars for unstructured sparse DNN workloads. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 498,493
1309.7817 | Performance Analysis of Massive MIMO for Cell-Boundary Users | In this paper, we consider massive multiple-input multiple-output (MIMO) systems for both downlink and uplink scenarios, where three radio units (RUs) connected via one digital unit (DU) support multiple user equipments (UEs) at the cell-boundary through the same radio resource, i.e., the same time-frequency slot. For downlink transmitter options, the study considers zero-forcing (ZF) and maximum ratio transmission (MRT), while for uplink receiver options it considers ZF and maximum ratio combining (MRC). For the sum rate of each of these, we derive simple closed-form formulas. In the simple but practically relevant case where uniform power is allocated to all downlink data streams, we observe that, for the downlink, vector normalization is better for ZF while matrix normalization is better for MRT. For a given antenna and user configuration, we also derive analytically the signal-to-noise-ratio (SNR) level below which MRC should be used instead of ZF. Numerical simulations confirm our analytical results. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 27,424 |
2209.11880 | Real-Time Model Predictive Control for Industrial Manipulators with Singularity-Tolerant Hierarchical Task Control | This paper proposes a real-time model predictive control (MPC) scheme to execute multiple tasks using robots over a finite-time horizon. In industrial robotic applications, we must carefully consider multiple constraints for avoiding joint position, velocity, and torque limits. In addition, singularity-free and smooth motions require executing tasks continuously and safely. Instead of formulating nonlinear MPC problems, we devise linear MPC problems using kinematic and dynamic models linearized along nominal trajectories produced by hierarchical controllers. These linear MPC problems are solvable via the use of Quadratic Programming; therefore, we significantly reduce the computation time of the proposed MPC framework so the resulting update frequency is higher than 1 kHz. Our proposed MPC framework is more efficient in reducing task tracking errors than a baseline based on operational space control (OSC). We validate our approach in numerical simulations and in real experiments using an industrial manipulator. More specifically, we deploy our method in two practical scenarios for robotic logistics: 1) controlling a robot carrying heavy payloads while accounting for torque limits, and 2) controlling the end-effector while avoiding singularities. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 319,328
2306.07996 | Point spread function modelling for astronomical telescopes: a review focused on weak gravitational lensing studies | The accurate modelling of the Point Spread Function (PSF) is of paramount importance in astronomical observations, as it allows for the correction of distortions and blurring caused by the telescope and atmosphere. PSF modelling is crucial for accurately measuring celestial objects' properties. The last decades brought us a steady increase in the power and complexity of astronomical telescopes and instruments. Upcoming galaxy surveys like Euclid and LSST will observe an unprecedented amount and quality of data. Modelling the PSF for these new facilities and surveys requires novel modelling techniques that can cope with the ever-tightening error requirements. The purpose of this review is three-fold. First, we introduce the optical background required for a more physically-motivated PSF modelling and propose an observational model that can be reused for future developments. Second, we provide an overview of the different physical contributors of the PSF, including the optic- and detector-level contributors and the atmosphere. We expect that the overview will help better understand the modelled effects. Third, we discuss the different methods for PSF modelling from the parametric and non-parametric families for ground- and space-based telescopes, with their advantages and limitations. Validation methods for PSF models are then addressed, with several metrics related to weak lensing studies discussed in detail. Finally, we explore current challenges and future directions in PSF modelling for astronomical telescopes. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 373,242
1603.02729 | Revisiting Active Perception | Despite the recent successes in robotics, artificial intelligence and computer vision, a complete artificial agent necessarily must include active perception. A multitude of ideas and methods for how to accomplish this have already appeared in the past, their broader utility perhaps impeded by insufficient computational power or costly hardware. The history of these ideas, perhaps selective due to our perspectives, is presented with the goal of organizing the past literature and highlighting the seminal contributions. We argue that those contributions are as relevant today as they were decades ago and, with the state of modern computational tools, are poised to find new life in the robotic perception systems of the next decade. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 53,046 |
2109.03780 | Bayesian Over-The-Air Computation | As an important piece of the multi-tier computing architecture for future wireless networks, over-the-air computation (OAC) enables efficient function computation in multiple-access edge computing, where a fusion center aims to compute a function of the data distributed at edge devices. Existing OAC relies exclusively on the maximum likelihood (ML) estimation at the fusion center to recover the arithmetic sum of the transmitted signals from different devices. ML estimation, however, is highly susceptible to noise. In particular, in the misaligned OAC where there are channel misalignments among received signals, ML estimation suffers from severe error propagation and noise enhancement. To address these challenges, this paper puts forth a Bayesian approach by letting each edge device transmit two pieces of statistical information to the fusion center such that Bayesian estimators can be devised to tackle the misalignments. Numerical and simulation results verify that, 1) For the aligned and synchronous OAC, our linear minimum mean squared error (LMMSE) estimator significantly outperforms the ML estimator. In the low signal-to-noise ratio (SNR) regime, the LMMSE estimator reduces the mean squared error (MSE) by at least 6 dB; in the high SNR regime, the LMMSE estimator lowers the error floor of MSE by 86.4%; 2) For the asynchronous OAC, our LMMSE and sum-product maximum a posteriori (SP-MAP) estimators are on an equal footing in terms of the MSE performance, and are significantly better than the ML estimator. Moreover, the SP-MAP estimator is computationally efficient, the complexity of which grows linearly with the packet length. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 254,164
2110.14322 | Node-wise Localization of Graph Neural Networks | Graph neural networks (GNNs) emerge as a powerful family of representation learning models on graphs. To derive node representations, they utilize a global model that recursively aggregates information from the neighboring nodes. However, different nodes reside at different parts of the graph in different local contexts, making their distributions vary across the graph. Ideally, how a node receives its neighborhood information should be a function of its local context, to diverge from the global GNN model shared by all nodes. To utilize node locality without overfitting, we propose a node-wise localization of GNNs by accounting for both global and local aspects of the graph. Globally, all nodes on the graph depend on an underlying global GNN to encode the general patterns across the graph; locally, each node is localized into a unique model as a function of the global model and its local context. Finally, we conduct extensive experiments on four benchmark graphs, and consistently obtain promising performance surpassing the state-of-the-art GNNs. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 263,498 |
1907.08494 | Performance evaluation of THz wireless systems under the joint impact of misalignment fading and phase noise | In this paper, we investigate the joint impact of misalignment fading and local oscillator (LO) phase noise (PHN) in multi-carrier terahertz (THz) wireless systems. In more detail, after establishing a suitable system model that takes into account the particularities of the THz channel, as well as the transceivers characteristics, we present simulation results that quantify the joint impact of misalignment fading and PHN in terms of average signal-to-interference-plus-noise-ratio (SINR) and outage probability (OP). | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 139,118
1607.05746 | Bayesian Non-Exhaustive Classification A Case Study: Online Name Disambiguation using Temporal Record Streams | The name entity disambiguation task aims to partition the records of multiple real-life persons so that each partition contains records pertaining to a unique person. Most of the existing solutions for this task operate in a batch mode, where all records to be disambiguated are initially available to the algorithm. However, more realistic settings require that the name disambiguation task be performed in an online fashion, in addition to, being able to identify records of new ambiguous entities having no preexisting records. In this work, we propose a Bayesian non-exhaustive classification framework for solving the online name disambiguation task. Our proposed method uses a Dirichlet process prior with a Normal × Normal × Inverse Wishart data model which enables identification of new ambiguous entities who have no records in the training data. For online classification, we use a one-sweep Gibbs sampler which is very efficient and effective. As a case study we consider bibliographic data in a temporal stream format and disambiguate authors by partitioning their papers into homogeneous groups. Our experimental results demonstrate that the proposed method is better than existing methods for performing the online name disambiguation task. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 58,791
1507.03867 | Rich Component Analysis | In many settings, we have multiple data sets (also called views) that capture different and overlapping aspects of the same phenomenon. We are often interested in finding patterns that are unique to one or to a subset of the views. For example, we might have one set of molecular observations and one set of physiological observations on the same group of individuals, and we want to quantify molecular patterns that are uncorrelated with physiology. Despite being a common problem, this is highly challenging when the correlations come from complex distributions. In this paper, we develop the general framework of Rich Component Analysis (RCA) to model settings where the observations from different views are driven by different sets of latent components, and each component can be a complex, high-dimensional distribution. We introduce algorithms based on cumulant extraction that provably learn each of the components without having to model the other components. We show how to integrate RCA with stochastic gradient descent into a meta-algorithm for learning general models, and demonstrate substantial improvement in accuracy on several synthetic and real datasets in both supervised and unsupervised tasks. Our method makes it possible to learn latent variable models when we don't have samples from the true model but only samples after complex perturbations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 45,113 |
1511.04143 | Deep Reinforcement Learning in Parameterized Action Space | Recent work has shown that deep neural networks are capable of approximating both value functions and policies in reinforcement learning domains featuring continuous state and action spaces. However, to the best of our knowledge no previous work has succeeded at using deep neural networks in structured (parameterized) continuous action spaces. To fill this gap, this paper focuses on learning within the domain of simulated RoboCup soccer, which features a small set of discrete action types, each of which is parameterized with continuous variables. The best learned agent can score goals more reliably than the 2012 RoboCup champion agent. As such, this paper represents a successful extension of deep reinforcement learning to the class of parameterized action space MDPs. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | true | false | false | 48,850 |
1905.12080 | Non-normal Recurrent Neural Network (nnRNN): learning long time dependencies while improving expressivity with transient dynamics | A recent strategy to circumvent the exploding and vanishing gradient problem in RNNs, and to allow the stable propagation of signals over long time scales, is to constrain recurrent connectivity matrices to be orthogonal or unitary. This ensures eigenvalues with unit norm and thus stable dynamics and training. However, this comes at the cost of reduced expressivity due to the limited variety of orthogonal transformations. We propose a novel connectivity structure based on the Schur decomposition and a splitting of the Schur form into normal and non-normal parts. This allows us to parametrize matrices with unit-norm eigenspectra without orthogonality constraints on eigenbases. The resulting architecture ensures access to a larger space of spectrally constrained matrices, of which orthogonal matrices are a subset. This crucial difference retains the stability advantages and training speed of orthogonal RNNs while enhancing expressivity, especially on tasks that require computations over ongoing input sequences. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 132,639
2502.02689 | Multidimensional Swarm Flight Approach For Chasing Unauthorized UAVs Leveraging Asynchronous Deep Learning | This paper introduces a novel unmanned aerial vehicle (UAV) chasing system designed to track and chase unauthorized UAVs, significantly enhancing their neutralization effectiveness. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 530,427
2501.02937 | 4D-CS: Exploiting Cluster Prior for 4D Spatio-Temporal LiDAR Semantic Segmentation | Semantic segmentation of LiDAR points has significant value for autonomous driving and mobile robot systems. Most approaches explore spatio-temporal information of multi-scan to identify the semantic classes and motion states for each point. However, these methods often overlook the segmentation consistency in space and time, which may result in point clouds within the same object being predicted as different categories. To handle this issue, our core idea is to generate cluster labels across multiple frames that can reflect the complete spatial structure and temporal information of objects. These labels serve as explicit guidance for our dual-branch network, 4D-CS, which integrates point-based and cluster-based branches to enable more consistent segmentation. Specifically, in the point-based branch, we leverage historical knowledge to enrich the current feature through temporal fusion on multiple views. In the cluster-based branch, we propose a new strategy to produce cluster labels of foreground objects and apply them to gather point-wise information to derive cluster features. We then merge neighboring clusters across multiple scans to restore missing features due to occlusion. Finally, in the point-cluster fusion stage, we adaptively fuse the information from the two branches to optimize segmentation results. Extensive experiments confirm the effectiveness of the proposed method, and we achieve state-of-the-art results on the multi-scan semantic and moving object segmentation on SemanticKITTI and nuScenes datasets. The code will be available at https://github.com/NEU-REAL/4D-CS.git. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 522,699
1810.02320 | Computer vision-based framework for extracting geological lineaments from optical remote sensing data | The extraction of geological lineaments from digital satellite data is a fundamental application in remote sensing. The location of geological lineaments such as faults and dykes are of interest for a range of applications, particularly because of their association with hydrothermal mineralization. Although a wide range of applications have utilized computer vision techniques, a standard workflow for application of these techniques to mineral exploration is lacking. We present a framework for extracting geological lineaments using computer vision techniques which is a combination of edge detection and line extraction algorithms for extracting geological lineaments using optical remote sensing data. It features ancillary computer vision techniques for reducing data dimensionality, removing noise and enhancing the expression of lineaments. We test the proposed framework on Landsat 8 data of a mineral-rich portion of the Gascoyne Province in Western Australia using different dimension reduction techniques and convolutional filters. To validate the results, the extracted lineaments are compared to our manual photointerpretation and geologically mapped structures by the Geological Survey of Western Australia (GSWA). The results show that the best correlation between our extracted geological lineaments and the GSWA geological lineament map is achieved by applying a minimum noise fraction transformation and a Laplacian filter. Application of a directional filter instead shows a stronger correlation with the output of our manual photointerpretation and known sites of hydrothermal mineralization. Hence, our framework using either filter can be used for mineral prospectivity mapping in other regions where faults are exposed and observable in optical remote sensing data. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 109,570
2307.01668 | Training Energy-Based Models with Diffusion Contrastive Divergences | Energy-Based Models (EBMs) have been widely used for generative modeling. Contrastive Divergence (CD), a prevailing training objective for EBMs, requires sampling from the EBM with Markov Chain Monte Carlo methods (MCMCs), which leads to an irreconcilable trade-off between the computational burden and the validity of the CD. Running MCMCs till convergence is computationally intensive. On the other hand, short-run MCMC brings in an extra non-negligible parameter gradient term that is difficult to handle. In this paper, we provide a general interpretation of CD, viewing it as a special instance of our proposed Diffusion Contrastive Divergence (DCD) family. By replacing the Langevin dynamic used in CD with other EBM-parameter-free diffusion processes, we propose a more efficient divergence. We show that the proposed DCDs are both more computationally efficient than the CD and are not limited to a non-negligible gradient term. We conduct intensive experiments, including both synthetic data modeling and high-dimensional image denoising and generation, to show the advantages of the proposed DCDs. On the synthetic data learning and image denoising experiments, our proposed DCD outperforms CD by a large margin. In image generation experiments, the proposed DCD is capable of training an energy-based model for generating the CelebA $32\times 32$ dataset, which is comparable to existing EBMs. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 377,431
1912.13088 | Off-Policy Estimation of Long-Term Average Outcomes with Applications to Mobile Health | Due to the recent advancements in wearables and sensing technology, health scientists are increasingly developing mobile health (mHealth) interventions. In mHealth interventions, mobile devices are used to deliver treatment to individuals as they go about their daily lives. These treatments are generally designed to impact a near time, proximal outcome such as stress or physical activity. The mHealth intervention policies, often called just-in-time adaptive interventions, are decision rules that map an individual's current state (e.g., individual's past behaviors as well as current observations of time, location, social activity, stress and urges to smoke) to a particular treatment at each of many time points. The vast majority of current mHealth interventions deploy expert-derived policies. In this paper, we provide an approach for conducting inference about the performance of one or more such policies using historical data collected under a possibly different policy. Our measure of performance is the average of proximal outcomes over a long time period should the particular mHealth policy be followed. We provide an estimator as well as confidence intervals. This work is motivated by HeartSteps, an mHealth physical activity intervention. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 159,002
2305.13300 | Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts | By providing external information to large language models (LLMs), tool augmentation (including retrieval augmentation) has emerged as a promising solution for addressing the limitations of LLMs' static parametric memory. However, how receptive are LLMs to such external evidence, especially when the evidence conflicts with their parametric memory? We present the first comprehensive and controlled investigation into the behavior of LLMs when encountering knowledge conflicts. We propose a systematic framework to elicit high-quality parametric memory from LLMs and construct the corresponding counter-memory, which enables us to conduct a series of controlled experiments. Our investigation reveals seemingly contradicting behaviors of LLMs. On the one hand, different from prior wisdom, we find that LLMs can be highly receptive to external evidence even when that conflicts with their parametric memory, given that the external evidence is coherent and convincing. On the other hand, LLMs also demonstrate a strong confirmation bias when the external evidence contains some information that is consistent with their parametric memory, despite being presented with conflicting evidence at the same time. These results pose important implications that are worth careful consideration for the further development and deployment of tool- and retrieval-augmented LLMs. Resources are available at https://github.com/OSU-NLP-Group/LLM-Knowledge-Conflict. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 366,447
2010.09367 | On Properties and Optimization of Information-theoretic Privacy Watchdog | We study the problem of privacy preservation in data sharing, where $S$ is a sensitive variable to be protected and $X$ is a non-sensitive useful variable correlated with $S$. Variable $X$ is randomized into variable $Y$, which will be shared or released according to $p_{Y|X}(y|x)$. We measure privacy leakage by \emph{information privacy} (also known as \emph{log-lift} in the literature), which guarantees mutual information privacy and differential privacy (DP). Let $\mathcal{X}_{\epsilon}^{c} \subseteq \mathcal{X}$ contain elements in the alphabet of $X$ for which the absolute value of log-lift (abs-log-lift for short) is greater than a desired threshold $\epsilon$. When elements $x\in \mathcal{X}_{\epsilon}^{c}$ are randomized into $y\in \mathcal{Y}$, we derive the best upper bound on the abs-log-lift across the resultant pairs $(s,y)$. We then prove that this bound is achievable via an \emph{$X$-invariant} randomization $p(y|x) = R(y)$ for $x,y\in\mathcal{X}_{\epsilon}^{c}$. However, the utility measured by the mutual information $I(X;Y)$ is severely damaged in imposing a strict upper bound $\epsilon$ on the abs-log-lift. To remedy this and inspired by the probabilistic ($\epsilon$, $\delta$)-DP, we propose a relaxed ($\epsilon$, $\delta$)-log-lift framework. To achieve this relaxation, we introduce a greedy algorithm which exempts some elements in $\mathcal{X}_{\epsilon}^{c}$ from randomization, as long as their abs-log-lift is bounded by $\epsilon$ with probability $1-\delta$. Numerical results demonstrate the efficacy of this algorithm in achieving a better privacy-utility tradeoff. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 201,513
2011.12946 | Exploratory LQG Mean Field Games with Entropy Regularization | We study a general class of entropy-regularized multi-variate LQG mean field games (MFGs) in continuous time with $K$ distinct sub-population of agents. We extend the notion of actions to action distributions (exploratory actions), and explicitly derive the optimal action distributions for individual agents in the limiting MFG. We demonstrate that the optimal set of action distributions yields an $\epsilon$-Nash equilibrium for the finite-population entropy-regularized MFG. Furthermore, we compare the resulting solutions with those of classical LQG MFGs and establish the equivalence of their existence. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 208,320 |
2403.04558 | Reducing self-supervised learning complexity improves weakly-supervised classification performance in computational pathology | Deep Learning models have been successfully utilized to extract clinically actionable insights from routinely available histology data. Generally, these models require annotations performed by clinicians, which are scarce and costly to generate. The emergence of self-supervised learning (SSL) methods removes this barrier, allowing for large-scale analyses on non-annotated data. However, recent SSL approaches apply increasingly expansive model architectures and larger datasets, causing the rapid escalation of data volumes, hardware prerequisites, and overall expenses, limiting access to these resources to few institutions. Therefore, we investigated the complexity of contrastive SSL in computational pathology in relation to classification performance with the utilization of consumer-grade hardware. Specifically, we analyzed the effects of adaptations in data volume, architecture, and algorithms on downstream classification tasks, emphasizing their impact on computational resources. We trained breast cancer foundation models on a large public patient cohort and validated them on various downstream classification tasks in a weakly supervised manner on two external public patient cohorts. Our experiments demonstrate that we can improve downstream classification performance whilst reducing SSL training duration by 90%. In summary, we propose a set of adaptations which enable the utilization of SSL in computational pathology in non-resource abundant environments. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 435,643
2405.01249 | Prompt engineering paradigms for medical applications: scoping review and recommendations for better practices | Prompt engineering is crucial for harnessing the potential of large language models (LLMs), especially in the medical domain where specialized terminology and phrasing are used. However, the efficacy of prompt engineering in the medical domain remains to be explored. In this work, 114 recent studies (2022-2024) applying prompt engineering in medicine, covering prompt learning (PL), prompt tuning (PT), and prompt design (PD), are reviewed. PD is the most prevalent (78 articles). In 12 papers, PD, PL, and PT terms were used interchangeably. ChatGPT is the most commonly used LLM, with seven papers using it for processing sensitive clinical data. Chain-of-Thought emerges as the most common prompt engineering technique. While PL and PT articles typically provide a baseline for evaluating prompt-based approaches, 64% of PD studies lack non-prompt-related baselines. We provide tables and figures summarizing existing work, and reporting recommendations to guide future research contributions. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 451,286
2301.05336 | Multitask Weakly Supervised Learning for Origin Destination Travel Time Estimation | Travel time estimation from GPS trips is of great importance to order duration, ridesharing, taxi dispatching, etc. However, the dense trajectory is not always available due to the limitation of data privacy and acquisition, while the origin destination (OD) type of data, such as NYC taxi data, NYC bike data, and Capital Bikeshare data, is more accessible. To address this issue, this paper starts to estimate the OD trips travel time combined with the road network. Subsequently, a Multitask Weakly Supervised Learning Framework for Travel Time Estimation (MWSL-TTE) has been proposed to infer the transition probability between road segments, and the travel time on road segments and intersections simultaneously. Technically, given an OD pair, the transition probability intends to recover the most possible route. And then, the output of travel time is equal to the summation of all segments' and intersections' travel time in this route. A novel route recovery function has been proposed to iteratively maximize the current route's co-occurrence probability, and minimize the discrepancy between routes' probability distribution and the inverse distribution of routes' estimation loss. Moreover, the expected log-likelihood function based on a weakly supervised framework has been deployed in optimizing the travel time from road segments and intersections concurrently. We conduct experiments on a wide range of real-world taxi datasets in Xi'an and Chengdu and demonstrate our method's effectiveness on route recovery and travel time estimation. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 340,323
2209.09631 | De-Identification of French Unstructured Clinical Notes for Machine Learning Tasks | Unstructured textual data are at the heart of health systems: liaison letters between doctors, operating reports, coding of procedures according to the ICD-10 standard, etc. The details included in these documents make it possible to get to know the patient better, to better manage him or her, to better study the pathologies, to accurately remunerate the associated medical acts\ldots All this seems to be (at least partially) within reach of today by artificial intelligence techniques. However, for obvious reasons of privacy protection, the designers of these AIs do not have the legal right to access these documents as long as they contain identifying data. De-identifying these documents, i.e. detecting and deleting all identifying information present in them, is a legally necessary step for sharing this data between two complementary worlds. Over the last decade, several proposals have been made to de-identify documents, mainly in English. While the detection scores are often high, the substitution methods are often not very robust to attack. In French, very few methods are based on arbitrary detection and/or substitution rules. In this paper, we propose a new comprehensive de-identification method dedicated to French-language medical documents. Both the approach for the detection of identifying elements (based on deep learning) and their substitution (based on differential privacy) are based on the most proven existing approaches. The result is an approach that effectively protects the privacy of the patients at the heart of these medical documents. The whole approach has been evaluated on a French language medical dataset of a French public hospital and the results are very encouraging. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 318,581
2112.07794 | Review of Factor Graphs for Robust GNSS Applications | Factor graphs have recently emerged as an alternative solution method for GNSS positioning. In this article, we review how factor graphs are implemented in GNSS, some of their advantages over Kalman Filters, and their importance in making positioning solutions more robust to degraded measurements. We also talk about how factor graphs can be an important tool for the field radio-navigation community. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 271,586 |
1709.03947 | Constant Space Complexity Environment Representation for Vision-based Navigation | This paper presents a preliminary conceptual investigation into an environment representation that has constant space complexity with respect to the camera image space. This type of representation allows the planning algorithms of a mobile agent to bypass what are often complex and noisy transformations between camera image space and Euclidean space. The approach is to compute per-pixel potential values directly from processed camera data, which results in a discrete potential field that has constant space complexity with respect to the image plane. This can enable planning and control algorithms, whose complexity often depends on the size of the environment representation, to be defined with constant run-time. This type of approach can be particularly useful for platforms with strict resource constraints, such as embedded and real-time systems. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 80,570
2409.17758 | Adapting Deep Variational Bayes Filter for Enhanced Confidence Estimation in Finite Element Method Integrated Networks (FEMIN) | The Finite Element Method (FEM) is a widely used technique for simulating crash scenarios with high accuracy and reliability. To reduce the significant computational costs associated with FEM, the Finite Element Method Integrated Networks (FEMIN) framework integrates neural networks (NNs) with FEM solvers. However, this integration can introduce errors and deviations from full-FEM simulations, highlighting the need for an additional metric to assess prediction confidence, especially when no ground truth data is available. In this study, we adapt the Deep Variational Bayes Filter (DVBF) to the FEMIN framework, incorporating a probabilistic approach to provide qualitative insights into prediction confidence during FEMIN simulations. The adaptation involves using the learned transition model for a predictive decoding step, generating a preliminary force prediction. This predictive force is used alongside the displacement and the velocity data from the FEM solver as input for the encoder model. The decoder reconstructs the likelihood distribution based on the posterior. The mean force of this distribution is applied to the FEM solver, while the predicted standard deviation can be used for uncertainty estimation. Our findings demonstrate that the DVBF outperforms deterministic NN architectures in terms of accuracy. Furthermore, the standard deviation derived from the decoder serves as a valuable qualitative metric for assessing the confidence in FEMIN simulations. This approach enhances the robustness of FEMIN by providing a measure of reliability alongside the simulation results. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 491,969
2306.17829 | Federated Ensemble YOLOv5 -- A Better Generalized Object Detection Algorithm | Federated learning (FL) has gained significant traction as a privacy-preserving algorithm, but the underlying resemblances of federated learning algorithms like Federated averaging (FedAvg) or Federated SGD (FedSGD) to ensemble learning algorithms have not been fully explored. The purpose of this paper is to examine the application of FL to object detection as a method to enhance generalizability, and to compare its performance against a centralized training approach for an object detection algorithm. Specifically, we investigate the performance of a YOLOv5 model trained using FL across multiple clients and employ a random sampling strategy without replacement, so each client holds a portion of the same dataset used for centralized training. Our experimental results showcase the superior efficiency of the FL object detector's global model in generating accurate bounding boxes for unseen objects, with the test set being a mixture of objects from two distinct clients not represented in the training dataset. These findings suggest that FL can be viewed from an ensemble algorithm perspective, akin to a synergistic blend of Bagging and Boosting techniques. As a result, FL can be seen not only as a method to enhance privacy, but also as a method to enhance the performance of a machine learning model. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 376,823
2409.03189 | A note on the differential spectrum of the Ness-Helleseth function | Let $n\geqslant3$ be an odd integer and $u$ an element in the finite field $\gf_{3^n}$. The Ness-Helleseth function is the binomial $f_u(x)=ux^{d_1}+x^{d_2}$ over $\gf_{3^n}$, where $d_1=\frac{3^n-1}{2}-1$ and $d_2=3^n-2$. In 2007, Ness and Helleseth showed that $f_u$ is an APN function when $\chi(u+1)=\chi(u-1)=\chi(u)$, is differentially $3$-uniform when $\chi(u+1)=\chi(u-1)\neq\chi(u)$, and has differential uniformity at most 4 if $ \chi(u+1)\neq\chi(u-1)$ and $u\notin\gf_3$. Here $\chi(\cdot)$ denotes the quadratic character on $\gf_{3^n}$. Recently, Xia et al. determined the differential uniformity of $f_u$ for all $u$ and computed the differential spectrum of $f_u$ for $u$ satisfying $\chi(u+1)=\chi(u-1)$ or $u\in\gf_3$. The remaining problem is the differential spectrum of $f_u$ with $\chi(u+1)\neq\chi(u-1)$ and $u\notin\gf_3$. In this paper, we fill in the gap. By studying differential equations arising from the Ness-Helleseth function $f_u$ more carefully, we express the differential spectrum of $f_u$ for such $u$ in terms of two quadratic character sums. This complements the previous work of Xia et al. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 485,951 |
2402.00449 | Parallel Spiking Unit for Efficient Training of Spiking Neural Networks | Efficient parallel computing has become a pivotal element in advancing artificial intelligence. Yet, the deployment of Spiking Neural Networks (SNNs) in this domain is hampered by their inherent sequential computational dependency. This constraint arises from the need for each time step's processing to rely on the preceding step's outcomes, significantly impeding the adaptability of SNN models to massively parallel computing environments. Addressing this challenge, our paper introduces the innovative Parallel Spiking Unit (PSU) and its two derivatives, the Input-aware PSU (IPSU) and Reset-aware PSU (RPSU). These variants skillfully decouple the leaky integration and firing mechanisms in spiking neurons while probabilistically managing the reset process. By preserving the fundamental computational attributes of the spiking neuron model, our approach enables the concurrent computation of all membrane potential instances within the SNN, facilitating parallel spike output generation and substantially enhancing computational efficiency. Comprehensive testing across various datasets, including static and sequential images, Dynamic Vision Sensor (DVS) data, and speech datasets, demonstrates that the PSU and its variants not only significantly boost performance and simulation speed but also augment the energy efficiency of SNNs through enhanced sparsity in neural activity. These advancements underscore the potential of our method in revolutionizing SNN deployment for high-performance parallel computing applications. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 425,615 |
2410.05269 | Data Advisor: Dynamic Data Curation for Safety Alignment of Large Language Models | Data is a crucial element in large language model (LLM) alignment. Recent studies have explored using LLMs for efficient data collection. However, LLM-generated data often suffers from quality issues, with underrepresented or absent aspects and low-quality datapoints. To address these problems, we propose Data Advisor, an enhanced LLM-based method for generating data that takes into account the characteristics of the desired dataset. Starting from a set of pre-defined principles in hand, Data Advisor monitors the status of the generated data, identifies weaknesses in the current dataset, and advises the next iteration of data generation accordingly. Data Advisor can be easily integrated into existing data generation methods to enhance data quality and coverage. Experiments on safety alignment of three representative LLMs (i.e., Mistral, Llama2, and Falcon) demonstrate the effectiveness of Data Advisor in enhancing model safety against various fine-grained safety issues without sacrificing model utility. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 495,644
1903.01620 | What to Expect of Classifiers? Reasoning about Logistic Regression with Missing Features | While discriminative classifiers often yield strong predictive performance, missing feature values at prediction time can still be a challenge. Classifiers may not behave as expected under certain ways of substituting the missing values, since they inherently make assumptions about the data distribution they were trained on. In this paper, we propose a novel framework that classifies examples with missing features by computing the expected prediction with respect to a feature distribution. Moreover, we use geometric programming to learn a naive Bayes distribution that embeds a given logistic regression classifier and can efficiently take its expected predictions. Empirical evaluations show that our model achieves the same performance as the logistic regression with all features observed, and outperforms standard imputation techniques when features go missing during prediction time. Furthermore, we demonstrate that our method can be used to generate "sufficient explanations" of logistic regression classifications, by removing features that do not affect the classification. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 123,299
2404.13973 | DEQ-MCL: Discrete-Event Queue-based Monte-Carlo Localization | Spatial cognition in hippocampal formation is posited to play a crucial role in the development of self-localization techniques for robots. In this paper, we propose a self-localization approach, DEQ-MCL, based on the discrete event queue hypothesis associated with phase precession within the hippocampal formation. Our method effectively estimates the posterior distribution of states, encompassing past, present, and future states that are organized as a queue. This approach enables the smoothing of the posterior distribution of past states using current observations and the weighting of the joint distribution by considering the feasibility of future states. Our findings indicate that the proposed method holds promise for augmenting self-localization performance in indoor environments. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 448,520
2403.01431 | Image2Sentence based Asymmetrical Zero-shot Composed Image Retrieval | The task of composed image retrieval (CIR) aims to retrieve images based on the query image and the text describing the users' intent. Existing methods have made great progress with the advanced large vision-language (VL) model in the CIR task; however, they generally suffer from two main issues: lack of labeled triplets for model training and difficulty of deploying the large vision-language model in resource-restricted environments. To tackle the above problems, we propose Image2Sentence based Asymmetric zero-shot composed image retrieval (ISA), which takes advantage of the VL model and only relies on unlabeled images for composition learning. In the framework, we propose a new adaptive token learner that maps an image to a sentence in the word embedding space of the VL model. The sentence adaptively captures discriminative visual information and is further integrated with the text modifier. An asymmetric structure is devised for flexible deployment, in which the lightweight model is adopted for the query side while the large VL model is deployed on the gallery side. The global contrastive distillation and the local alignment regularization are adopted for the alignment between the light model and the VL model for the CIR task. Our experiments demonstrate that the proposed ISA could better cope with the real retrieval scenarios and further improve retrieval accuracy and efficiency. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 434,412
2107.01057 | Backward-Compatible Prediction Updates: A Probabilistic Approach | When machine learning systems meet real world applications, accuracy is only one of several requirements. In this paper, we assay a complementary perspective originating from the increasing availability of pre-trained and regularly improving state-of-the-art models. While new improved models develop at a fast pace, downstream tasks vary more slowly or stay constant. Assume that we have a large unlabelled data set for which we want to maintain accurate predictions. Whenever a new and presumably better ML model becomes available, we encounter two problems: (i) given a limited budget, which data points should be re-evaluated using the new model?; and (ii) if the new predictions differ from the current ones, should we update? Problem (i) is about compute cost, which matters for very large data sets and models. Problem (ii) is about maintaining consistency of the predictions, which can be highly relevant for downstream applications; our demand is to avoid negative flips, i.e., changing correct to incorrect predictions. In this paper, we formalize the Prediction Update Problem and present an efficient probabilistic approach as answer to the above questions. In extensive experiments on standard classification benchmark data sets, we show that our method outperforms alternative strategies along key metrics for backward-compatible prediction updates. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 244,361
2303.15433 | Anti-DreamBooth: Protecting users from personalized text-to-image synthesis | Text-to-image diffusion models are nothing but a revolution, allowing anyone, even without design skills, to create realistic images from simple text inputs. With powerful personalization tools like DreamBooth, they can generate images of a specific person just by learning from his/her few reference images. However, when misused, such a powerful and convenient tool can produce fake news or disturbing content targeting any individual victim, posing a severe negative social impact. In this paper, we explore a defense system called Anti-DreamBooth against such malicious use of DreamBooth. The system aims to add subtle noise perturbation to each user's image before publishing in order to disrupt the generation quality of any DreamBooth model trained on these perturbed images. We investigate a wide range of algorithms for perturbation optimization and extensively evaluate them on two facial datasets over various text-to-image model versions. Despite the complicated formulation of DreamBooth and Diffusion-based text-to-image models, our methods effectively defend users from the malicious use of those models. Their effectiveness withstands even adverse conditions, such as model or prompt/term mismatching between training and testing. Our code will be available at https://github.com/VinAIResearch/Anti-DreamBooth.git. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 354,489
2403.10290 | Offline Goal-Conditioned Reinforcement Learning for Shape Control of Deformable Linear Objects | Deformable objects present several challenges to the field of robotic manipulation. One of the tasks that best encapsulates the difficulties arising due to non-rigid behavior is shape control, which requires driving an object to a desired shape. While shape-servoing methods have been shown successful in contexts with approximately linear behavior, they can fail in tasks with more complex dynamics. We investigate an alternative approach, using offline RL to solve a planar shape control problem of a Deformable Linear Object (DLO). To evaluate the effect of material properties, two DLOs are tested, namely a soft rope and an elastic cord. We frame this task as a goal-conditioned offline RL problem, and aim to learn to generalize to unseen goal shapes. Data collection and augmentation procedures are proposed to limit the amount of experimental data which needs to be collected with the real robot. We evaluate the amount of augmentation needed to achieve the best results, and test the effect of regularization through behavior cloning on the TD3+BC algorithm. Finally, we show that the proposed approach is able to outperform a shape-servoing baseline in a curvature inversion experiment. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 438,133
2310.20246 | Breaking Language Barriers in Multilingual Mathematical Reasoning: Insights and Observations | Existing research predominantly focuses on developing powerful large language models (LLMs) for mathematical reasoning within monolingual languages, with few explorations in preserving efficacy in a multilingual context. To bridge this gap, this paper pioneers exploring and training powerful Multilingual Math Reasoning (xMR) LLMs. Firstly, by utilizing translation, we construct the first multilingual math reasoning instruction dataset, MGSM8KInstruct, encompassing ten distinct languages, thus addressing the issue of training data scarcity in xMR tasks. Based on the collected dataset, we propose different training strategies to build powerful xMR LLMs, named MathOctopus, which notably outperform conventional open-source LLMs and exhibit superiority over ChatGPT in few-shot scenarios. Notably, MathOctopus-13B reaches 47.6% accuracy, which exceeds ChatGPT's 46.3% on the MGSM testset. Beyond remarkable results, we unearth several pivotal observations and insights from extensive experiments: (1) When extending the rejection sampling strategy to the multilingual context, it proves effective for model performances, albeit limited. (2) Employing parallel corpora for math Supervised Fine-Tuning (SFT) across multiple languages not only significantly enhances model performance multilingually but also elevates their monolingual performance. This indicates that crafting multilingual corpora can be regarded as a vital strategy for enhancing model performance in a specific language, especially in mathematical reasoning tasks. For instance, MathOctopus-7B improves its counterpart trained on English from 42.2% to 50.8% on the GSM8K testset. Codes are available at https://github.com/microsoft/MathOctopus. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 404,317
2304.06412 | Quantifying and Explaining Machine Learning Uncertainty in Predictive Process Monitoring: An Operations Research Perspective | This paper introduces a comprehensive, multi-stage machine learning methodology that effectively integrates information systems and artificial intelligence to enhance decision-making processes within the domain of operations research. The proposed framework adeptly addresses common limitations of existing solutions, such as the neglect of data-driven estimation for vital production parameters, exclusive generation of point forecasts without considering model uncertainty, and lacking explanations regarding the sources of such uncertainty. Our approach employs Quantile Regression Forests for generating interval predictions, alongside both local and global variants of SHapley Additive Explanations for the examined predictive process monitoring problem. The practical applicability of the proposed methodology is substantiated through a real-world production planning case study, emphasizing the potential of prescriptive analytics in refining decision-making procedures. This paper accentuates the imperative of addressing these challenges to fully harness the extensive and rich data resources accessible for well-informed decision-making. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 357,970
1304.1086 | Integrating Probabilistic, Taxonomic and Causal Knowledge in Abductive Diagnosis | We propose an abductive diagnosis theory that integrates probabilistic, causal and taxonomic knowledge. Probabilistic knowledge allows us to select the most likely explanation; causal knowledge allows us to make reasonable independence assumptions; taxonomic knowledge allows causation to be modeled at different levels of detail, and allows observations to be described at different levels of precision. Unlike most other approaches where a causal explanation is a hypothesis that one or more causative events occurred, we define an explanation of a set of observations to be an occurrence of a chain of causation events. These causation events constitute a scenario where all the observations are true. We show that the probabilities of the scenarios can be computed from the conditional probabilities of the causation events. Abductive reasoning is inherently complex even if only modest expressive power is allowed. However, our abduction algorithm is exponential only in the number of observations to be explained, and is polynomial in the size of the knowledge base. This contrasts with many other abduction procedures that are exponential in the size of the knowledge base. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 23,439
2011.01046 | NEARL: Non-Explicit Action Reinforcement Learning for Robotic Control | Traditionally, reinforcement learning methods predict the next action based on the current state. However, in many situations, directly applying actions to control systems or robots is dangerous and may lead to unexpected behaviors because actions are rather low-level. In this paper, we propose a novel hierarchical reinforcement learning framework without explicit action. Our meta policy tries to manipulate the next optimal state, and the actual action is produced by the inverse dynamics model. To stabilize the training process, we integrate adversarial learning and information bottleneck into our framework. Under our framework, widely available state-only demonstrations can be exploited effectively for imitation learning. Also, prior knowledge and constraints can be applied to the meta policy. We test our algorithm in simulation tasks and its combination with imitation learning. The experimental results show the reliability and robustness of our algorithms. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 204,463
2402.06921 | Clustering Techniques Selection for a Hybrid Regression Model: A Case
Study Based on a Solar Thermal System | This work addresses the performance comparison between four clustering techniques with the objective of achieving strong hybrid models in supervised learning tasks. A real dataset from a bio-climatic house named Sotavento, placed on an experimental wind farm and located in Xermade (Lugo) in Galicia (Spain), has been collected. The authors have chosen the thermal solar generation system in order to study how it performs when applying several clustering methods followed by a regression technique to predict the output temperature of the system. With the objective of defining the quality of each clustering method, two possible solutions have been implemented. The first one is based on three unsupervised learning metrics (Silhouette, Calinski-Harabasz and Davies-Bouldin) while the second one employs the most common error measurements for a regression algorithm such as the Multi-Layer Perceptron. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | 428,487
2308.05739 | Zero Grads: Learning Local Surrogate Losses for Non-Differentiable
Graphics | Gradient-based optimization is now ubiquitous across graphics, but unfortunately cannot be applied to problems with undefined or zero gradients. To circumvent this issue, the loss function can be manually replaced by a ``surrogate'' that has similar minima but is differentiable. Our proposed framework, ZeroGrads, automates this process by learning a neural approximation of the objective function, which in turn can be used to differentiate through arbitrary black-box graphics pipelines. We train the surrogate on an actively smoothed version of the objective and encourage locality, focusing the surrogate's capacity on what matters at the current training episode. The fitting is performed online, alongside the parameter optimization, and self-supervised, without pre-computed data or pre-trained models. As sampling the objective is expensive (it requires a full rendering or simulator run), we devise an efficient sampling scheme that allows for tractable run-times and competitive performance at little overhead. We demonstrate optimizing diverse non-convex, non-differentiable black-box problems in graphics, such as visibility in rendering, discrete parameter spaces in procedural modelling or optimal control in physics-driven animation. In contrast to other derivative-free algorithms, our approach scales well to higher dimensions, which we demonstrate on problems with up to 35k interlinked variables. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 384,891
2211.10764 | Understanding the Bystander Effect on Toxic Twitter Conversations | In this study, we explore the power of group dynamics to shape the toxicity of Twitter conversations. First, we examine how the presence of others in a conversation can potentially diffuse Twitter users' responsibility to address a toxic direct reply. Second, we examine whether the toxicity of the first direct reply to a toxic tweet in conversations establishes the group norms for subsequent replies. By doing so, we outline how bystanders and the tone of initial responses to a toxic reply are explanatory factors which affect whether others feel uninhibited to post their own abusive or derogatory replies. We test this premise by analyzing a random sample of more than 156k tweets belonging to ~9k conversations. Central to this work is the social psychological research on the "bystander effect" documenting that the presence of bystanders has the power to alter the dynamics of a social situation. If the first direct reply reaffirms the divisive tone, other replies may follow suit. We find evidence of a bystander effect, with our results showing that an increased number of users participating in the conversation before receiving a toxic tweet is negatively associated with the number of Twitter users who responded to the toxic reply in a non-toxic way. We also find that the initial responses to toxic tweets within conversations are of great importance. Posting a toxic reply immediately after a toxic comment is negatively associated with users posting non-toxic replies and Twitter conversations becoming increasingly toxic. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 331,429
2407.02547 | Domain Generalizable Knowledge Tracing via Concept Aggregation and
Relation-Based Attention | Knowledge Tracing (KT) is a critical task in online education systems, aiming to monitor students' knowledge states throughout a learning period. Common KT approaches involve predicting the probability of a student correctly answering the next question based on their exercise history. However, these methods often suffer from performance degradation when faced with the scarcity of student interactions in new education systems. To address this, we leverage student interactions from existing education systems to mitigate performance degradation caused by limited training data. Nevertheless, these interactions exhibit significant differences since they are derived from different education systems. To address this issue, we propose a domain generalization approach for knowledge tracing, where existing education systems are considered source domains, and new education systems with limited data are considered target domains. Additionally, we design a domain-generalizable knowledge tracing framework (DGKT) that can be applied to any KT model. Specifically, we present a concept aggregation approach designed to reduce conceptual disparities within sequences of student interactions from diverse domains. To further mitigate domain discrepancies, we introduce a novel normalization module called Sequence Instance Normalization (SeqIN). Moreover, to fully leverage exercise information, we propose a new knowledge tracing model tailored for the domain generalization KT task, named Domain-Generalizable Relation-based Knowledge Tracing (DGRKT). Extensive experiments across five benchmark datasets demonstrate that the proposed method performs well despite limited training data. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 469,790 |
1312.6978 | Mod\`ele \`a processus latent et algorithme EM pour la r\'egression non
lin\'eaire | A non-linear regression approach which consists of a specific regression model incorporating a latent process, allowing various polynomial regression models to be activated preferentially and smoothly, is introduced in this paper. The model parameters are estimated by maximum likelihood performed via a dedicated expectation-maximization (EM) algorithm. An experimental study using simulated and real data sets reveals good performances of the proposed approach. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 29,433
2203.14806 | Extraction of Visual Information to Predict Crowdfunding Success | Researchers have increasingly turned to crowdfunding platforms to gain insights into entrepreneurial activity and dynamics. While previous studies have explored various factors influencing crowdfunding success, such as technology, communication, and marketing strategies, the role of visual elements that can be automatically extracted from images has received less attention. This is surprising, considering that crowdfunding platforms emphasize the importance of attention-grabbing and high-resolution images, and previous research has shown that image characteristics can significantly impact product evaluations. Indeed, we conducted a comprehensive review of empirical articles (n = 202) that utilized Kickstarter data, focusing on the incorporation of visual information in their analyses. Our findings reveal that only 29.70% controlled for the number of images, and less than 12% considered any image details. In this manuscript, we review the literature on image processing and its relevance to the business domain, highlighting two types of visual variables: visual counts (number of pictures and number of videos) and image details. Building upon previous work that discussed the role of color, composition and figure-ground relationships, we introduce visual scene elements that have not yet been explored in crowdfunding, including the number of faces, the number of concepts depicted, and the ease of identifying those concepts. To demonstrate the predictive value of visual counts and image details, we analyze Kickstarter data. Our results highlight that visual count features are two of the top three predictors of success. Our results also show that simple image detail features such as color matter a lot, and our proposed measures of visual scene elements can also be useful. We supplement our article with R and Python codes that help authors extract image details (https://osf.io/ujnzp/). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 288,136
2306.12343 | Quantum R\'enyi and $f$-divergences from integral representations | Smooth Csisz\'ar $f$-divergences can be expressed as integrals over so-called hockey stick divergences. This motivates a natural quantum generalization in terms of quantum Hockey stick divergences, which we explore here. Using this recipe, the Kullback-Leibler divergence generalises to the Umegaki relative entropy, in the integral form recently found by Frenkel. We find that the R\'enyi divergences defined via our new quantum $f$-divergences are not additive in general, but that their regularisations surprisingly yield the Petz R\'enyi divergence for $\alpha < 1$ and the sandwiched R\'enyi divergence for $\alpha > 1$, unifying these two important families of quantum R\'enyi divergences. Moreover, we find that the contraction coefficients for the new quantum $f$ divergences collapse for all $f$ that are operator convex, mimicking the classical behaviour and resolving some long-standing conjectures by Lesniewski and Ruskai. We derive various inequalities, including new reverse Pinsker inequalities with applications in differential privacy and explore various other applications of the new divergences. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 374,909 |
2008.06266 | Optimized Deep Encoder-Decoder Methods for Crack Segmentation | Surface crack segmentation poses a challenging computer vision task as background, shape, colour and size of cracks vary. In this work we propose optimized deep encoder-decoder methods consisting of a combination of techniques which yield an increase in crack segmentation performance. Specifically we propose a decoder-part for an encoder-decoder based deep learning architecture for semantic segmentation and study its components to achieve increased performance. We also examine the use of different encoder strategies and introduce a data augmentation policy to increase the amount of available training data. The performance evaluation of our method is carried out on four publicly available crack segmentation datasets. Additionally, we introduce two techniques into the field of surface crack segmentation, previously not used there: Generating results using test-time-augmentation and performing a statistical result analysis over multiple training runs. The former approach generally yields increased performance results, whereas the latter allows for more reproducible and better representability of a methods results. Using those aforementioned strategies with our proposed encoder-decoder architecture we are able to achieve new state of the art results in all datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 191,755 |
1507.05681 | A Tractable Analysis of the Improvement in Unique Localizability Through
Collaboration | In this paper, we mathematically characterize the improvement in device localizability achieved by allowing collaboration among devices. Depending on the detection sensitivity of the receivers in the devices, it is not unusual for a device awaiting localization to lack a sufficient number of detectable positioning signals from localized devices needed to determine its location without ambiguity (i.e., to be uniquely localizable). This occurrence is well-known to be a limiting factor in localization performance, especially in communications systems. In cellular positioning, for example, cellular network designers call this the hearability problem. We study the conditions required for unique localizability and use tools from stochastic geometry to derive accurate analytic expressions for the probabilities of meeting these conditions in the noncollaborative and collaborative cases. We consider the scenario without shadowing, the scenario with shadowing and universal frequency reuse, and, finally, the shadowing scenario with random frequency reuse. The results from the latter scenario, which apply particularly to cellular networks, reveal that collaboration between two devices separated by only a short distance yields drastic improvements in both devices' abilities to uniquely determine their positions. The results from this analysis are very promising and motivate delving further into techniques which enhance cellular positioning with small-scale collaborative ranging observations among nearby devices. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 45,310
2303.17158 | KD-DLGAN: Data Limited Image Generation via Knowledge Distillation | Generative Adversarial Networks (GANs) rely heavily on large-scale training data for training high-quality image generation models. With limited training data, the GAN discriminator often suffers from severe overfitting which directly leads to degraded generation especially in generation diversity. Inspired by the recent advances in knowledge distillation (KD), we propose KD-DLGAN, a knowledge-distillation based generation framework that introduces pre-trained vision-language models for training effective data-limited generation models. KD-DLGAN consists of two innovative designs. The first is aggregated generative KD that mitigates the discriminator overfitting by challenging the discriminator with harder learning tasks and distilling more generalizable knowledge from the pre-trained models. The second is correlated generative KD that improves the generation diversity by distilling and preserving the diverse image-text correlation within the pre-trained models. Extensive experiments over multiple benchmarks show that KD-DLGAN achieves superior image generation with limited training data. In addition, KD-DLGAN complements the state-of-the-art with consistent and substantial performance gains. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 355,125 |
2311.06311 | Game Theory Solutions in Sensor-Based Human Activity Recognition: A
Review | Human Activity Recognition (HAR) tasks automatically identify human activities from sensor data, which has numerous applications in healthcare, sports, security, and human-computer interaction. Despite significant advances in HAR, critical challenges still exist. Game theory has emerged as a promising solution to address these challenges in machine learning problems including HAR. However, there is a lack of research work on applying game theory solutions to HAR problems. This review paper explores the potential of game theory as a solution for HAR tasks, and bridges the gap between game theory and HAR research work by suggesting novel game-theoretic approaches for HAR problems. The contributions of this work include exploring how game theory can improve the accuracy and robustness of HAR models, investigating how game-theoretic concepts can optimize recognition algorithms, and comparing game-theoretic approaches with existing HAR methods. The objective is to provide insights into the potential of game theory as a solution for sensor-based HAR, and to contribute to developing more accurate and efficient recognition systems in future research. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 406,895
2305.19713 | Red Teaming Language Model Detectors with Language Models | The prevalence and strong capability of large language models (LLMs) present significant safety and ethical risks if exploited by malicious users. To prevent the potentially deceptive usage of LLMs, recent works have proposed algorithms to detect LLM-generated text and protect LLMs. In this paper, we investigate the robustness and reliability of these LLM detectors under adversarial attacks. We study two types of attack strategies: 1) replacing certain words in an LLM's output with their synonyms given the context; 2) automatically searching for an instructional prompt to alter the writing style of the generation. In both strategies, we leverage an auxiliary LLM to generate the word replacements or the instructional prompt. Different from previous works, we consider a challenging setting where the auxiliary LLM can also be protected by a detector. Experiments reveal that our attacks effectively compromise the performance of all detectors in the study with plausible generations, underscoring the urgent need to improve the robustness of LLM-generated text detection systems. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 369,651 |
2108.08090 | CollaborER: A Self-supervised Entity Resolution Framework Using
Multi-features Collaboration | Entity Resolution (ER) aims to identify whether two tuples refer to the same real-world entity and is well-known to be labor-intensive. It is a prerequisite to anomaly detection, as comparing the attribute values of two matched tuples from two different datasets provides one effective way to detect anomalies. Existing ER approaches, due to insufficient feature discovery or error-prone inherent characteristics, are not able to achieve stable performance. In this paper, we present CollaborER, a self-supervised entity resolution framework via multi-features collaboration. It is capable of (i) obtaining reliable ER results with zero human annotations and (ii) discovering adequate tuples' features in a fault-tolerant manner. CollaborER consists of two phases, i.e., automatic label generation (ALG) and collaborative ER training (CERT). In the first phase, ALG is proposed to generate a set of positive tuple pairs and a set of negative tuple pairs. ALG guarantees the high quality of the generated tuples and hence ensures the training quality of the subsequent CERT. In the second phase, CERT is introduced to learn the matching signals by discovering graph features and sentence features of tuples collaboratively. Extensive experimental results over eight real-world ER benchmarks show that CollaborER outperforms all the existing unsupervised ER approaches and is comparable or even superior to the state-of-the-art supervised ER methods. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 251,131 |
2003.07849 | Blur, Noise, and Compression Robust Generative Adversarial Networks | Generative adversarial networks (GANs) have gained considerable attention owing to their ability to reproduce images. However, they can recreate training images faithfully despite image degradation in the form of blur, noise, and compression, generating similarly degraded images. To solve this problem, the recently proposed noise robust GAN (NR-GAN) provides a partial solution by demonstrating the ability to learn a clean image generator directly from noisy images using a two-generator model comprising image and noise generators. However, its application is limited to noise, which is relatively easy to decompose owing to its additive and reversible characteristics, and its application to irreversible image degradation, in the form of blur, compression, and combination of all, remains a challenge. To address these problems, we propose blur, noise, and compression robust GAN (BNCR-GAN) that can learn a clean image generator directly from degraded images without knowledge of degradation parameters (e.g., blur kernel types, noise amounts, or quality factor values). Inspired by NR-GAN, BNCR-GAN uses a multiple-generator model composed of image, blur-kernel, noise, and quality-factor generators. However, in contrast to NR-GAN, to address irreversible characteristics, we introduce masking architectures adjusting degradation strength values in a data-driven manner using bypasses before and after degradation. Furthermore, to suppress uncertainty caused by the combination of blur, noise, and compression, we introduce adaptive consistency losses imposing consistency between irreversible degradation processes according to the degradation strengths. We demonstrate the effectiveness of BNCR-GAN through large-scale comparative studies on CIFAR-10 and a generality analysis on FFHQ. In addition, we demonstrate the applicability of BNCR-GAN in image restoration. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 168,561
2307.12661 | Algorithmic construction of Lyapunov functions for continuous vector
fields via convex semi-infinite programs | This article presents a novel numerically tractable technique for synthesizing Lyapunov functions for equilibria of nonlinear vector fields. In broad strokes, corresponding to an isolated equilibrium point of a given vector field, a selection is made of a compact neighborhood of the equilibrium and a dictionary of functions in which a Lyapunov function is expected to lie. Then an algorithmic procedure based on the recent work [DACC22] is deployed on the preceding neighborhood-dictionary pair and charged with the task of finding a function satisfying a compact family of inequalities that defines the behavior of a Lyapunov function on the selected neighborhood. The technique applies to continuous nonlinear vector fields without special algebraic structures and does not even require their analytical expressions to proceed. Several numerical examples are presented to illustrate our results. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 381,339 |
1801.09936 | PEYMA: A Tagged Corpus for Persian Named Entities | The goal in the NER task is to classify proper nouns of a text into classes such as person, location, and organization. This is an important preprocessing step in many NLP tasks such as question-answering and summarization. Although many research studies have been conducted in this area in English and the state-of-the-art NER systems have reached performances higher than 90 percent in terms of F1 measure, there are very few research studies for this task in Persian. One of the main causes of this may be the lack of a standard Persian NER dataset to train and test NER systems. In this research we create a standard, big-enough tagged Persian NER dataset which will be distributed for free for research purposes. In order to construct such a standard dataset, we studied standard NER datasets which are constructed for English research and found out that almost all of these datasets are constructed using news texts. So we collected documents from ten news websites. Later, in order to provide annotators with some guidelines to tag these documents, after studying the guidelines used for constructing the CoNLL and MUC standard English datasets, we set our own guidelines considering Persian linguistic rules. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 89,199
1806.06545 | A Simple Reservoir Model of Working Memory with Real Values | The prefrontal cortex is known to be involved in many high-level cognitive functions, in particular, working memory. Here, we study to what extent a group of randomly connected units (namely an Echo State Network, ESN) can store and maintain (as output) an arbitrary real value from a streamed input, i.e. can act as a sustained working memory unit. Furthermore, we explore to what extent such an architecture can take advantage of the stored value in order to produce non-linear computations. Comparison between different architectures (with and without feedback, with and without a working memory unit) shows that an explicit memory improves the performances. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 100,726 |
1912.10204 | A Machine Learning Framework for Authorship Identification From Texts | Authorship identification is a process in which the author of a text is identified. Most known literary texts can easily be attributed to a certain author because they are, for example, signed. Yet sometimes we find unfinished pieces of work or a whole bunch of manuscripts with a wide variety of possible authors. In order to assess the importance of such a manuscript, it is vital to know who wrote it. In this work, we aim to develop a machine learning framework to effectively determine authorship. We formulate the task as a single-label multi-class text categorization problem and propose a supervised machine learning framework incorporating stylometric features. This task is highly interdisciplinary in that it takes advantage of machine learning, information retrieval, and natural language processing. We present an approach and a model which learns the differences in writing style between $50$ different authors and is able to predict the author of a new text with high accuracy. The accuracy is seen to increase significantly after introducing certain linguistic stylometric features along with text features. | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | 158,264 |
2305.02693 | Semi-supervised Domain Adaptation via Prototype-based Multi-level
Learning | In semi-supervised domain adaptation (SSDA), a few labeled target samples of each class help the model to transfer knowledge representation from the fully labeled source domain to the target domain. Many existing methods ignore the benefits of making full use of the labeled target samples at multiple levels. To make better use of this additional data, we propose a novel Prototype-based Multi-level Learning (ProML) framework to better tap the potential of labeled target samples. To achieve intra-domain adaptation, we first introduce a pseudo-label aggregation based on the intra-domain optimal transport to help the model align the feature distribution of unlabeled target samples and the prototype. At the inter-domain level, we propose a cross-domain alignment loss to help the model use the target prototype for cross-domain knowledge transfer. We further propose a dual consistency based on prototype similarity and linear classifier to promote discriminative learning of compact target feature representation at the batch level. Extensive experiments on three datasets, including DomainNet, VisDA2017, and Office-Home demonstrate that our proposed method achieves state-of-the-art performance in SSDA. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 362,136
2207.06741 | Differentiable Logics for Neural Network Training and Verification | The rising popularity of neural networks (NNs) in recent years and their increasing prevalence in real-world applications have drawn attention to the importance of their verification. While verification is known to be computationally difficult theoretically, many techniques have been proposed for solving it in practice. It has been observed in the literature that by default neural networks rarely satisfy logical constraints that we want to verify. A good course of action is to train the given NN to satisfy said constraint prior to verifying them. This idea is sometimes referred to as continuous verification, referring to the loop between training and verification. Usually training with constraints is implemented by specifying a translation for a given formal logic language into loss functions. These loss functions are then used to train neural networks. Because for training purposes these functions need to be differentiable, these translations are called differentiable logics (DL). This raises several research questions. What kind of differentiable logics are possible? What difference does a specific choice of DL make in the context of continuous verification? What are the desirable criteria for a DL viewed from the point of view of the resulting loss function? In this extended abstract we will discuss and answer these questions. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 307,977 |
2203.07643 | Can Synthetic Translations Improve Bitext Quality? | Synthetic translations have been used for a wide range of NLP tasks primarily as a means of data augmentation. This work explores, instead, how synthetic translations can be used to revise potentially imperfect reference translations in mined bitext. We find that synthetic samples can improve bitext quality without any additional bilingual supervision when they replace the originals based on a semantic equivalence classifier that helps mitigate NMT noise. The improved quality of the revised bitext is confirmed intrinsically via human evaluation and extrinsically through bilingual induction and MT tasks. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 285,498 |
1603.03101 | Recursive Recurrent Nets with Attention Modeling for OCR in the Wild | We present recursive recurrent neural networks with attention modeling (R$^2$AM) for lexicon-free optical character recognition in natural scene images. The primary advantages of the proposed method are: (1) use of recursive convolutional neural networks (CNNs), which allow for parametrically efficient and effective image feature extraction; (2) an implicitly learned character-level language model, embodied in a recurrent neural network which avoids the need to use N-grams; and (3) the use of a soft-attention mechanism, allowing the model to selectively exploit image features in a coordinated way, and allowing for end-to-end training within a standard backpropagation framework. We validate our method with state-of-the-art performance on challenging benchmark datasets: Street View Text, IIIT5k, ICDAR and Synth90k. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 53,076 |
2010.11148 | FastEmit: Low-latency Streaming ASR with Sequence-level Emission
Regularization | Streaming automatic speech recognition (ASR) aims to emit each hypothesized word as quickly and accurately as possible. However, emitting fast without degrading quality, as measured by word error rate (WER), is highly challenging. Existing approaches including Early and Late Penalties and Constrained Alignments penalize emission delay by manipulating per-token or per-frame probability prediction in sequence transducer models. While being successful in reducing delay, these approaches suffer from significant accuracy regression and also require additional word alignment information from an existing model. In this work, we propose a sequence-level emission regularization method, named FastEmit, that applies latency regularization directly on per-sequence probability in training transducer models, and does not require any alignment. We demonstrate that FastEmit is more suitable to the sequence-level optimization of transducer models for streaming ASR by applying it on various end-to-end streaming ASR networks including RNN-Transducer, Transformer-Transducer, ConvNet-Transducer and Conformer-Transducer. We achieve 150-300 ms latency reduction with significantly better accuracy over previous techniques on a Voice Search test set. FastEmit also improves streaming ASR accuracy from 4.4%/8.9% to 3.1%/7.5% WER, meanwhile reduces 90th percentile latency from 210 ms to only 30 ms on LibriSpeech. | false | false | true | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 202,138 |
2006.11513 | Deep Learning based Radio Resource Management in NOMA Networks: User Association, Subchannel and Power Allocation | With the rapid development of future wireless communication, the combination of NOMA technology and millimeter-wave(mmWave) technology has become a research hotspot. The application of NOMA in mmWave heterogeneous networks can meet the diverse needs of users in different applications and scenarios in future communications. In this paper, we propose a machine learning framework to deal with the user association, subchannel and power allocation problems in such a complex scenario. We focus on maximizing the energy efficiency (EE) of the system under the constraints of quality of service (QoS), interference limitation, and power limitation. Specifically, user association is solved through the Lagrange dual decomposition method, while semi-supervised learning and deep neural network (DNN) are used for the subchannel and power allocation, respectively. In particular, unlabeled samples are introduced to improve approximation and generalization ability for subchannel allocation. The simulation indicates that the proposed scheme can achieve higher EE with lower complexity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 183,268 |
2411.02149 | Improving Domain Generalization in Self-supervised Monocular Depth Estimation via Stabilized Adversarial Training | Learning a self-supervised Monocular Depth Estimation (MDE) model with great generalization remains significantly challenging. Despite the success of adversarial augmentation in the supervised learning generalization, naively incorporating it into self-supervised MDE models potentially causes over-regularization, suffering from severe performance degradation. In this paper, we conduct qualitative analysis and illuminate the main causes: (i) inherent sensitivity in the UNet-alike depth network and (ii) dual optimization conflict caused by over-regularization. To tackle these issues, we propose a general adversarial training framework, named Stabilized Conflict-optimization Adversarial Training (SCAT), integrating adversarial data augmentation into self-supervised MDE methods to achieve a balance between stability and generalization. Specifically, we devise an effective scaling depth network that tunes the coefficients of long skip connection and effectively stabilizes the training process. Then, we propose a conflict gradient surgery strategy, which progressively integrates the adversarial gradient and optimizes the model toward a conflict-free direction. Extensive experiments on five benchmarks demonstrate that SCAT can achieve state-of-the-art performance and significantly improve the generalization capability of existing self-supervised MDE methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 505,376 |
2410.12971 | Self-Pluralising Culture Alignment for Large Language Models | As large language models (LLMs) become increasingly accessible in many countries, it is essential to align them to serve pluralistic human values across cultures. However, pluralistic culture alignment in LLMs remains an open problem. In this paper, we propose CultureSPA, a Self-Pluralising Culture Alignment framework that allows LLMs to simultaneously align to pluralistic cultures. The framework first generates questions on various culture topics, then yields LLM outputs in response to these generated questions under both culture-aware and culture-unaware settings. By comparing culture-aware/unaware outputs, we are able to detect and collect culture-related instances. These instances are employed to fine-tune LLMs to serve pluralistic cultures in either a culture-joint or culture-specific way. Extensive experiments demonstrate that CultureSPA significantly improves the alignment of LLMs to diverse cultures without compromising general abilities. And further improvements can be achieved if CultureSPA is combined with advanced prompt engineering techniques. Comparisons between culture-joint and culture-specific tuning strategies, along with variations in data quality and quantity, illustrate the robustness of our method. We also explore the mechanisms underlying CultureSPA and the relations between different cultures it reflects. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 499,302 |
2406.16976 | Efficient Evolutionary Search Over Chemical Space with Large Language Models | Molecular discovery, when formulated as an optimization problem, presents significant computational challenges because optimization objectives can be non-differentiable. Evolutionary Algorithms (EAs), often used to optimize black-box objectives in molecular discovery, traverse chemical space by performing random mutations and crossovers, leading to a large number of expensive objective evaluations. In this work, we ameliorate this shortcoming by incorporating chemistry-aware Large Language Models (LLMs) into EAs. Namely, we redesign crossover and mutation operations in EAs using LLMs trained on large corpora of chemical information. We perform extensive empirical studies on both commercial and open-source models on multiple tasks involving property optimization, molecular rediscovery, and structure-based drug design, demonstrating that the joint usage of LLMs with EAs yields superior performance over all baseline models across single- and multi-objective settings. We demonstrate that our algorithm improves both the quality of the final solution and convergence speed, thereby reducing the number of required objective evaluations. Our code is available at http://github.com/zoom-wang112358/MOLLEO | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | false | false | 467,375 |
1804.04168 | Differentiable Learning of Quantum Circuit Born Machine | Quantum circuit Born machines are generative models which represent the probability distribution of classical dataset as quantum pure states. Computational complexity considerations of the quantum sampling problem suggest that the quantum circuits exhibit stronger expressibility compared to classical neural networks. One can efficiently draw samples from the quantum circuits via projective measurements on qubits. However, similar to the leading implicit generative models in deep learning, such as the generative adversarial networks, the quantum circuits cannot provide the likelihood of the generated samples, which poses a challenge to the training. We devise an efficient gradient-based learning algorithm for the quantum circuit Born machine by minimizing the kerneled maximum mean discrepancy loss. We simulated generative modeling of the Bars-and-Stripes dataset and Gaussian mixture distributions using deep quantum circuits. Our experiments show the importance of circuit depth and gradient-based optimization algorithm. The proposed learning algorithm is runnable on near-term quantum device and can exhibit quantum advantages for generative modeling. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 94,769 |
1811.12787 | A Tutorial for Weighted Bipolar Argumentation with Continuous Dynamical Systems and the Java Library Attractor | Weighted bipolar argumentation frameworks allow modeling decision problems and online discussions by defining arguments and their relationships. The strength of arguments can be computed based on an initial weight and the strength of attacking and supporting arguments. While previous approaches assumed an acyclic argumentation graph and successively set arguments' strength based on the strength of their parents, recently continuous dynamical systems have been proposed as an alternative. Continuous models update arguments' strength simultaneously and continuously. While there are currently no analytical guarantees for convergence in general graphs, experiments show that continuous models can converge quickly in large cyclic graphs with thousands of arguments. Here, we focus on the high-level ideas of this approach and explain key results and applications. We also introduce Attractor, a Java library that can be used to solve weighted bipolar argumentation problems. Attractor contains implementations of several discrete and continuous models and numerical algorithms to compute solutions. It also provides base classes that can be used to implement, to evaluate and to compare continuous models easily. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 115,095 |
1305.6003 | Exploiting Self-Interference Suppression for Improved Spectrum Awareness/Efficiency in Cognitive Radio Systems | Inspired by recent developments in full-duplex communications, we propose and study new modes of operation for cognitive radios with the goal of achieving improved primary user (PU) detection and/or secondary user (SU) throughput. Specifically, we consider an opportunistic PU/SU setting in which the SU is equipped with partial/complete self-interference suppression (SIS), enabling it to transmit and receive/sense at the same time. Following a brief sensing period, the SU can operate in either simultaneous transmit-and-sense (TS) mode or simultaneous transmit-and-receive (TR) mode. We analytically study the performance metrics for the two modes, namely the detection and false-alarm probabilities, the PU outage probability, and the SU throughput. From this analysis, we evaluate the sensing-throughput tradeoff for both modes. Our objective is to find the optimal sensing and transmission durations for the SU that maximize its throughput subject to a given outage probability. We also explore the spectrum awareness/efficiency tradeoff that arises from the two modes by determining an efficient adaptive strategy for the SU link. This strategy has a threshold structure, which depends on the PU traffic load. Our study considers both perfect and imperfect sensing as well as perfect/imperfect SIS. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 24,809 |
2407.10640 | Error Bounds for the Network Scale-Up Method | Epidemiologists and social scientists have used the Network Scale-Up Method (NSUM) for over thirty years to estimate the size of a hidden sub-population within a social network. This method involves querying a subset of network nodes about the number of their neighbours belonging to the hidden sub-population. In general, NSUM assumes that the social network topology and the hidden sub-population distribution are well-behaved; hence, the NSUM estimate is close to the actual value. However, bounds on NSUM estimation errors have not been analytically proven. This paper provides analytical bounds on the error incurred by the two most popular NSUM estimators. These bounds assume that the queried nodes accurately provide their degree and the number of neighbors belonging to the hidden population. Our key findings are twofold. First, we show that when an adversary designs the network and places the hidden sub-population, then the estimate can be a factor of $\Omega(\sqrt{n})$ off from the real value (in a network with $n$ nodes). Second, we also prove error bounds when the underlying network is randomly generated, showing that a small constant factor can be achieved with high probability using samples of logarithmic size $O(\log{n})$. We present improved analytical bounds for Erdos-Renyi and Scale-Free networks. Our theoretical analysis is supported by an extensive set of numerical experiments designed to determine the effect of the sample size on the accuracy of the estimates in both synthetic and real networks. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 473,061 |
2412.10392 | Computational Methods for Breast Cancer Molecular Profiling through Routine Histopathology: A Review | Precision medicine has become a central focus in breast cancer management, advancing beyond conventional methods to deliver more precise and individualized therapies. Traditionally, histopathology images have been used primarily for diagnostic purposes; however, they are now recognized for their potential in molecular profiling, which provides deeper insights into cancer prognosis and treatment response. Recent advancements in artificial intelligence (AI) have enabled digital pathology to analyze histopathologic images for both targeted molecular and broader omic biomarkers, marking a pivotal step in personalized cancer care. These technologies offer the capability to extract various biomarkers such as genomic, transcriptomic, proteomic, and metabolomic markers directly from the routine hematoxylin and eosin (H&E) stained images, which can support treatment decisions without the need for costly molecular assays. In this work, we provide a comprehensive review of AI-driven techniques for biomarker detection, with a focus on diverse omic biomarkers that allow novel biomarker discovery. Additionally, we analyze the major challenges faced in this field for robust algorithm development. These challenges highlight areas where further research is essential to bridge the gap between AI research and clinical application. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 516,898 |
2411.09268 | LES-Talker: Fine-Grained Emotion Editing for Talking Head Generation in Linear Emotion Space | While existing one-shot talking head generation models have achieved progress in coarse-grained emotion editing, there is still a lack of fine-grained emotion editing models with high interpretability. We argue that for an approach to be considered fine-grained, it needs to provide clear definitions and sufficiently detailed differentiation. We present LES-Talker, a novel one-shot talking head generation model with high interpretability, to achieve fine-grained emotion editing across emotion types, emotion levels, and facial units. We propose a Linear Emotion Space (LES) definition based on Facial Action Units to characterize emotion transformations as vector transformations. We design the Cross-Dimension Attention Net (CDAN) to deeply mine the correlation between LES representation and 3D model representation. Through mining multiple relationships across different feature and structure dimensions, we enable LES representation to guide the controllable deformation of 3D model. In order to adapt the multimodal data with deviations to the LES and enhance visual quality, we utilize specialized network design and training strategies. Experiments show that our method provides high visual quality along with multilevel and interpretable fine-grained emotion editing, outperforming mainstream methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 508,193 |