id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2209.14792 | Make-A-Video: Text-to-Video Generation without Text-Video Data | We propose Make-A-Video -- an approach for directly translating the tremendous recent progress in Text-to-Image (T2I) generation to Text-to-Video (T2V). Our intuition is simple: learn what the world looks like and how it is described from paired text-image data, and learn how the world moves from unsupervised video footage. Make-A-Video has three advantages: (1) it accelerates training of the T2V model (it does not need to learn visual and multimodal representations from scratch), (2) it does not require paired text-video data, and (3) the generated videos inherit the vastness (diversity in aesthetic, fantastical depictions, etc.) of today's image generation models. We design a simple yet effective way to build on T2I models with novel and effective spatial-temporal modules. First, we decompose the full temporal U-Net and attention tensors and approximate them in space and time. Second, we design a spatial temporal pipeline to generate high resolution and frame rate videos with a video decoder, interpolation model and two super resolution models that can enable various applications besides T2V. In all aspects, spatial and temporal resolution, faithfulness to text, and quality, Make-A-Video sets the new state-of-the-art in text-to-video generation, as determined by both qualitative and quantitative measures. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 320,355 |
2311.18116 | Multi-round Dynamic Group Decision Making Method On 2-Dimension Uncertain Linguistic Variables | At present, the language evaluation information of interactive group decision methods is based on one-dimensional linguistic variables, while multi-attribute group decision making methods based on two-dimensional linguistic information use only single-stage, static evaluation. In this paper, we propose a dynamic group decision making method based on two-dimensional linguistic information, combining dynamic interactive group decision making methods with two-dimensional language evaluation information. The method first uses Two-Dimensional Uncertain Linguistic Generalized Weighted Aggregation (DULGWA) operators to aggregate the preference information of each decision maker, and then adopts a dynamic information entropy method to obtain the weights of attributes at each stage. Finally, we propose a group consistency index to quantify the termination conditions of group interaction. One example is given to verify the developed approach and to demonstrate its effectiveness. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 411,563 |
2308.02137 | Learning the solution operator of two-dimensional incompressible Navier-Stokes equations using physics-aware convolutional neural networks | In recent years, the concept of introducing physics to machine learning has become widely popular. Most physics-inclusive ML techniques, however, are still limited to a single geometry or a set of parametrizable geometries. Thus, there remains the need to train a new model for a new geometry, even if it is only slightly modified. With this work we introduce a technique with which it is possible to learn approximate solutions to the steady-state Navier--Stokes equations in varying geometries without the need of parametrization. This technique is based on a combination of a U-Net-like CNN and well-established discretization methods from the field of the finite difference method. The results of our physics-aware CNN are compared to a state-of-the-art data-based approach. Additionally, it is also shown how our approach performs when combined with the data-based approach. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 383,506 |
1803.10123 | Task Agnostic Continual Learning Using Online Variational Bayes | Catastrophic forgetting is the notorious vulnerability of neural networks to the change of the data distribution while learning. This phenomenon has long been considered a major obstacle for allowing the use of learning agents in realistic continual learning settings. A large body of continual learning research assumes that task boundaries are known during training. However, research for scenarios in which task boundaries are unknown during training has been lacking. In this paper we present, for the first time, a method for preventing catastrophic forgetting (BGD) for scenarios with task boundaries that are unknown during training --- task-agnostic continual learning. Code of our algorithm is available at https://github.com/igolan/bgd. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 93,647 |
2404.17452 | A Continuous Relaxation for Discrete Bayesian Optimization | Optimizing efficiently over discrete data with only a few available target observations is a challenge in Bayesian optimization. We propose a continuous relaxation of the objective function and show that inference and optimization can be computationally tractable. We consider in particular the optimization domain where very few observations and strict budgets exist, motivated by optimizing protein sequences for expensive-to-evaluate biochemical properties. The advantages of our approach are two-fold: the problem is treated in the continuous setting, and available prior knowledge over sequences can be incorporated directly. More specifically, we utilize available and learned distributions over the problem domain for a weighting of the Hellinger distance which yields a covariance function. We show that the resulting acquisition function can be optimized with either continuous or discrete optimization algorithms and empirically assess our method on two biochemical sequence optimization tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 449,858 |
1705.11001 | Adversarial Ranking for Language Generation | Generative adversarial networks (GANs) have achieved great success in synthesizing data. However, the existing GANs restrict the discriminator to be a binary classifier, and thus limit their learning capacity for tasks that need to synthesize output with rich structures such as natural language descriptions. In this paper, we propose a novel generative adversarial network, RankGAN, for generating high-quality language descriptions. Rather than training the discriminator to learn and assign an absolute binary predicate to an individual data sample, the proposed RankGAN is able to analyze and rank a collection of human-written and machine-written sentences given a reference group. By viewing a set of data samples collectively and evaluating their quality through relative ranking scores, the discriminator is able to make a better assessment, which in turn helps to learn a better generator. The proposed RankGAN is optimized through the policy gradient technique. Experimental results on multiple public datasets clearly demonstrate the effectiveness of the proposed approach. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 74,516 |
2306.13976 | Channel Estimation for RIS aided MISO System | This paper considers the channel estimation of a single user in a MISO system with an intelligent reflecting surface (IRS). The performances of the minimum variance unbiased (MVU) and minimum mean square error (MMSE) estimators using the discrete Fourier transform activation pattern for the IRS, updated at every symbol interval, are compared. Numerical results show that the MMSE estimator provides over 10 dB SNR improvement compared to the MVU estimator. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 375,482 |
2302.14771 | Generic-to-Specific Distillation of Masked Autoencoders | Large vision Transformers (ViTs) driven by self-supervised pre-training mechanisms have achieved unprecedented progress. Lightweight ViT models, however, limited by their model capacity, benefit little from those pre-training mechanisms. Knowledge distillation defines a paradigm to transfer representations from large (teacher) models to small (student) ones. However, the conventional single-stage distillation easily gets stuck on task-specific transfer, failing to retain the task-agnostic knowledge crucial for model generalization. In this study, we propose generic-to-specific distillation (G2SD), to tap the potential of small ViT models under the supervision of large models pre-trained by masked autoencoders. In generic distillation, the decoder of the small model is encouraged to align feature predictions with hidden representations of the large model, so that task-agnostic knowledge can be transferred. In specific distillation, predictions of the small model are constrained to be consistent with those of the large model, to transfer task-specific features which guarantee task performance. With G2SD, the vanilla ViT-Small model respectively achieves 98.7%, 98.1%, and 99.3% of the performance of its teacher (ViT-Base) for image classification, object detection, and semantic segmentation, setting a solid baseline for two-stage vision distillation. Code will be available at https://github.com/pengzhiliang/G2SD. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 348,432 |
2402.01135 | A Multi-Agent Conversational Recommender System | Due to strong capabilities in conducting fluent, multi-turn conversations with users, Large Language Models (LLMs) have the potential to further improve the performance of Conversational Recommender Systems (CRS). Unlike the aimless chit-chat that LLMs excel at, CRS has a clear target, so it is imperative to control the dialogue flow in the LLM to successfully recommend appropriate items to the users. Furthermore, user feedback in CRS can assist the system in better modeling user preferences, which has been ignored by existing studies. However, simply prompting an LLM to conduct conversational recommendation cannot address the above two key challenges. In this paper, we propose the Multi-Agent Conversational Recommender System (MACRS), which contains two essential modules. First, we design a multi-agent act planning framework, which can control the dialogue flow based on four LLM-based agents. This cooperative multi-agent framework will generate various candidate responses based on different dialogue acts and then choose the most appropriate response as the system response, which can help MACRS plan suitable dialogue acts. Second, we propose a user feedback-aware reflection mechanism which leverages user feedback to reason about errors made in previous turns to adjust the dialogue act planning, and to extract higher-level user information from implicit semantics. We conduct extensive experiments based on a user simulator to demonstrate the effectiveness of MACRS in recommendation and user preference collection. Experimental results illustrate that MACRS demonstrates an improvement in user interaction experience compared to directly using LLMs. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 425,876 |
2308.00176 | A Flow Artist for High-Dimensional Cellular Data | We consider the problem of embedding point cloud data sampled from an underlying manifold with an associated flow or velocity. Such data arises in many contexts where static snapshots of dynamic entities are measured, including in high-throughput biology such as single-cell transcriptomics. Existing embedding techniques either do not utilize velocity information or embed the coordinates and velocities independently, i.e., they either impose velocities on top of an existing point embedding or embed points within a prescribed vector field. Here we present FlowArtist, a neural network that embeds points while jointly learning a vector field around the points. The combination allows FlowArtist to better separate and visualize velocity-informed structures. Our results, on toy datasets and single-cell RNA velocity data, illustrate the value of utilizing coordinate and velocity information in tandem for embedding and visualizing high-dimensional data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 382,841 |
2412.16526 | Text2midi: Generating Symbolic Music from Captions | This paper introduces text2midi, an end-to-end model to generate MIDI files from textual descriptions. Leveraging the growing popularity of multimodal generative approaches, text2midi capitalizes on the extensive availability of textual data and the success of large language models (LLMs). Our end-to-end system harnesses the power of LLMs to generate symbolic music in the form of MIDI files. Specifically, we utilize a pretrained LLM encoder to process captions, which then condition an autoregressive transformer decoder to produce MIDI sequences that accurately reflect the provided descriptions. This intuitive and user-friendly method significantly streamlines the music creation process by allowing users to generate music pieces using text prompts. We conduct comprehensive empirical evaluations, incorporating both automated and human studies, that show our model generates MIDI files of high quality that are indeed controllable by text captions that may include music theory terms such as chords, keys, and tempo. We release the code and music samples on our demo page (https://github.com/AMAAI-Lab/Text2midi) for users to interact with text2midi. | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 519,569 |
2005.05629 | Max-Consensus Over Fading Wireless Channels | The topic of this paper is achieving finite-time max-consensus in a multi-agent system that communicates over a fading wireless channel and exploits its interference property. This phenomenon corrupts the desired information when data is transmitted synchronously. In fact, each transmitted signal is attenuated by an unknown and time-varying factor (fading coefficient), then, by interference, all such attenuated signals are summed up at a receiver. Rather than combatting interference, we design a communication system that exploits it. Our strategy yields a more efficient usage of wireless resources compared to other algorithms. By simultaneously accessing this communication system, each agent obtains a weighted average of the neighbouring agents' information states. With this piece of information at hand and with a switching consensus protocol employing broadcast authorisations for agents, max-consensus can be achieved within a finite number of iterations. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 176,787 |
1611.06440 | Pruning Convolutional Neural Networks for Resource Efficient Inference | We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation - a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102), relying only on first-order gradient information. We also show that pruning can lead to more than 10x theoretical (5x practical) reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 64,188 |
1812.04528 | Deep Neural Networks for Choice Analysis: Extracting Complete Economic Information for Interpretation | While deep neural networks (DNNs) have been increasingly applied to choice analysis showing high predictive power, it is unclear to what extent researchers can interpret economic information from DNNs. This paper demonstrates that DNNs can provide economic information as complete as classical discrete choice models (DCMs). The economic information includes choice predictions, choice probabilities, market shares, substitution patterns of alternatives, social welfare, probability derivatives, elasticities, marginal rates of substitution (MRS), and heterogeneous values of time (VOT). Unlike DCMs, DNNs can automatically learn the utility function and reveal behavioral patterns that are not prespecified by domain experts. However, the economic information obtained from DNNs can be unreliable because of the three challenges associated with the automatic learning capacity: high sensitivity to hyperparameters, model non-identification, and local irregularity. To demonstrate the strength and challenges of DNNs, we estimated the DNNs using a stated preference survey, extracted the full list of economic information from the DNNs, and compared them with those from the DCMs. We found that the economic information aggregated over either trainings or the population is more reliable than the disaggregated information from individual observations or trainings, and that even simple hyperparameter searching can significantly improve the reliability of the economic information extracted from the DNNs. Future studies should investigate other regularizations and DNN architectures, better optimization algorithms, and robust DNN training methods to address DNNs' three challenges, to provide more reliable economic information from DNN-based choice models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 116,233 |
2307.06648 | LimSim: A Long-term Interactive Multi-scenario Traffic Simulator | With the growing popularity of digital twin and autonomous driving in transportation, the demand for simulation systems capable of generating high-fidelity and reliable scenarios is increasing. Existing simulation systems suffer from a lack of support for different types of scenarios, and the vehicle models used in these systems are too simplistic. Thus, such systems fail to represent driving styles and multi-vehicle interactions, and struggle to handle corner cases in the dataset. In this paper, we propose LimSim, the Long-term Interactive Multi-scenario traffic Simulator, which aims to provide a long-term continuous simulation capability under the urban road network. LimSim can simulate fine-grained dynamic scenarios and focus on the diverse interactions between multiple vehicles in the traffic flow. This paper provides a detailed introduction to the framework and features of the LimSim, and demonstrates its performance through case studies and experiments. LimSim is now open source on GitHub: https://www.github.com/PJLab-ADG/LimSim . | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 379,138 |
2304.10201 | UAV-based Receding Horizon Control for 3D Inspection Planning | Nowadays, unmanned aerial vehicles or UAVs are being used for a wide range of tasks, including infrastructure inspection, automated monitoring and coverage. This paper investigates the problem of 3D inspection planning with an autonomous UAV agent which is subject to dynamical and sensing constraints. We propose a receding horizon 3D inspection planning control approach for generating optimal trajectories which enable an autonomous UAV agent to inspect a finite number of feature-points scattered on the surface of a cuboid-like structure of interest. The inspection planning problem is formulated as a constrained open-loop optimal control problem and is solved using mixed integer programming (MIP) optimization. Quantitative and qualitative evaluation demonstrates the effectiveness of the proposed approach. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 359,326 |
2410.20265 | You Never Know: Quantization Induces Inconsistent Biases in Vision-Language Foundation Models | We study the impact of a standard practice in compressing foundation vision-language models - quantization - on the models' ability to produce socially-fair outputs. In contrast to prior findings with unimodal models that compression consistently amplifies social biases, our extensive evaluation of four quantization settings across three datasets and three CLIP variants yields a surprising result: while individual models demonstrate bias, we find no consistent change in bias magnitude or direction across a population of compressed models due to quantization. | false | false | false | false | false | false | true | false | false | false | false | true | false | true | false | false | false | false | 502,728 |
2203.01255 | Low-Degree Multicalibration | Introduced as a notion of algorithmic fairness, multicalibration has proved to be a powerful and versatile concept with implications far beyond its original intent. This stringent notion -- that predictions be well-calibrated across a rich class of intersecting subpopulations -- provides its strong guarantees at a cost: the computational and sample complexity of learning multicalibrated predictors are high, and grow exponentially with the number of class labels. In contrast, the relaxed notion of multiaccuracy can be achieved more efficiently, yet many of the most desirable properties of multicalibration cannot be guaranteed assuming multiaccuracy alone. This tension raises a key question: Can we learn predictors with multicalibration-style guarantees at a cost commensurate with multiaccuracy? In this work, we define and initiate the study of Low-Degree Multicalibration. Low-Degree Multicalibration defines a hierarchy of increasingly-powerful multi-group fairness notions that spans multiaccuracy and the original formulation of multicalibration at the extremes. Our main technical contribution demonstrates that key properties of multicalibration, related to fairness and accuracy, actually manifest as low-degree properties. Importantly, we show that low-degree multicalibration can be significantly more efficient than full multicalibration. In the multi-class setting, the sample complexity to achieve low-degree multicalibration improves exponentially (in the number of classes) over full multicalibration. Our work presents compelling evidence that low-degree multicalibration represents a sweet spot, pairing computational and sample efficiency with strong fairness and accuracy guarantees. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 283,312 |
2204.12390 | Quantum-classical convolutional neural networks in radiological image classification | Quantum machine learning is receiving significant attention currently, but its usefulness in comparison to classical machine learning techniques for practical applications remains unclear. However, there are indications that certain quantum machine learning algorithms might result in improved training capabilities with respect to their classical counterparts -- which might be particularly beneficial in situations with little training data available. Such situations naturally arise in medical classification tasks. Within this paper, different hybrid quantum-classical convolutional neural networks (QCCNN) with varying quantum circuit designs and encoding techniques are proposed. They are applied to two- and three-dimensional medical imaging data, e.g. featuring different, potentially malign, lesions in computed tomography scans. The performance of these QCCNNs is already similar to the one of their classical counterparts -- therefore encouraging further studies towards the direction of applying these algorithms within medical imaging tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 293,458 |
2012.00052 | Systematically Exploring Redundancy Reduction in Summarizing Long Documents | Our analysis of large summarization datasets indicates that redundancy is a very serious problem when summarizing long documents. Yet, redundancy reduction has not been thoroughly investigated in neural summarization. In this work, we systematically explore and compare different ways to deal with redundancy when summarizing long documents. Specifically, we organize the existing methods into categories based on when and how the redundancy is considered. Then, in the context of these categories, we propose three additional methods balancing non-redundancy and importance in a general and flexible way. In a series of experiments, we show that our proposed methods achieve the state-of-the-art with respect to ROUGE scores on two scientific paper datasets, Pubmed and arXiv, while reducing redundancy significantly. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 208,998 |
2203.01977 | Audio-Visual Object Classification for Human-Robot Collaboration | Human-robot collaboration requires the contactless estimation of the physical properties of containers manipulated by a person, for example while pouring content in a cup or moving a food box. Acoustic and visual signals can be used to estimate the physical properties of such objects, which may vary substantially in shape, material and size, and also be occluded by the hands of the person. To facilitate comparisons and stimulate progress in solving this problem, we present the CORSMAL challenge and a dataset to assess the performance of the algorithms through a set of well-defined performance scores. The tasks of the challenge are the estimation of the mass, capacity, and dimensions of the object (container), and the classification of the type and amount of its content. A novel feature of the challenge is our real-to-simulation framework for visualising and assessing the impact of estimation errors in human-to-robot handovers. | false | false | true | false | false | false | false | true | false | false | false | true | false | false | false | false | false | true | 283,584 |
2405.20121 | A Structure-Aware Lane Graph Transformer Model for Vehicle Trajectory Prediction | Accurate prediction of future trajectories for surrounding vehicles is vital for the safe operation of autonomous vehicles. This study proposes a Lane Graph Transformer (LGT) model with structure-aware capabilities. Its key contribution lies in encoding the map topology structure into the attention mechanism. To address variations in lane information from different directions, four Relative Positional Encoding (RPE) matrices are introduced to capture the local details of the map topology structure. Additionally, two Shortest Path Distance (SPD) matrices are employed to capture distance information between two accessible lanes. Numerical results indicate that the proposed LGT model achieves a significantly higher prediction performance on the Argoverse 2 dataset. Specifically, the minFDE$_6$ metric was decreased by 60.73% compared to the Argoverse 2 baseline model (Nearest Neighbor) and the b-minFDE$_6$ metric was reduced by 2.65% compared to the baseline LaneGCN model. Furthermore, ablation experiments demonstrated that the consideration of map topology structure led to a 4.24% drop in the b-minFDE$_6$ metric, validating the effectiveness of this model. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 459,207 |
2311.13871 | Legal Requirements Analysis | Modern software has been an integral part of everyday activities in many disciplines and application contexts. Introducing intelligent automation by leveraging artificial intelligence (AI) has led to breakthroughs in many fields. The effectiveness of AI can be attributed to several factors, among which is the increasing availability of data. Regulations such as the general data protection regulation (GDPR) in the European Union (EU) are introduced to ensure the protection of personal data. Software systems that collect, process, or share personal data are subject to compliance with such regulations. Developing compliant software depends heavily on addressing legal requirements stipulated in applicable regulations, a central activity in the requirements engineering (RE) phase of the software development process. RE is concerned with specifying and maintaining requirements of a system-to-be, including legal requirements. Legal agreements which describe the policies organizations implement for processing personal data can provide an additional source to regulations for eliciting legal requirements. In this chapter, we explore a variety of methods for analyzing legal requirements and exemplify them on GDPR. Specifically, we describe possible alternatives for creating machine-analyzable representations from regulations, survey the existing automated means for enabling compliance verification against regulations, and further reflect on the current challenges of legal requirements analysis. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 409,905 |
2306.11336 | Cooperative Multi-Agent Learning for Navigation via Structured State Abstraction | Cooperative multi-agent reinforcement learning (MARL) for navigation enables agents to cooperate to achieve their navigation goals. Using emergent communication, agents learn a communication protocol to coordinate and share information that is needed to achieve their navigation tasks. In emergent communication, symbols with no pre-specified usage rules are exchanged, in which the meaning and syntax emerge through training. Learning a navigation policy along with a communication protocol in a MARL environment is highly complex due to the huge state space to be explored. To cope with this complexity, this work proposes a novel neural network architecture for jointly learning an adaptive state space abstraction and a communication protocol among agents participating in navigation tasks. The goal is to come up with an adaptive abstractor that significantly reduces the size of the state space to be explored, without degradation in the policy performance. Simulation results show that the proposed method reaches a better policy, in terms of achievable rewards, resulting in fewer training iterations compared to the case where raw states or fixed state abstraction are used. Moreover, it is shown that a communication protocol emerges during training which enables the agents to learn better policies within fewer training iterations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | 374,559 |
2103.07820 | Dynamic Control Allocation between Onboard and Delayed Remote Control for Unmanned Aircraft System Detect-and-Avoid | This paper develops and evaluates the performance of an allocation agent to be potentially integrated into the onboard Detect and Avoid (DAA) computer of an Unmanned Aircraft System (UAS). We consider a UAS that can be fully controlled by the onboard DAA system and by a remote human pilot. With a communication channel prone to latency, we consider a mixed initiative interaction environment, where the control authority of the UAS is dynamically allocated by the allocation agent. In an encounter with a dynamic intruder, the probability of collision may increase in the absence of pilot commands in the presence of latency. Moreover, a delayed pilot command may not result in safe resolution of the current scenario and may need to be improvised. We design an optimization algorithm to reduce collision risk and refine delayed pilot commands. Towards this end, a Markov Decision Process (MDP) and its solution are employed to create a wait time map. The map consists of estimated times that the UAS can wait for the remote pilot commands at each state. A command blending algorithm is designed to select an avoidance maneuver that prioritizes the pilot intention extracted from the pilot commands. The wait time map and the command blending algorithm are implemented and integrated into a closed-loop simulator. We conduct ten thousand fast-time Monte Carlo simulations and compare the performance of the integrated setup with a standalone DAA setup. The simulation results show that the allocation agent enables the UAS to wait without inducing any near mid-air collision (NMAC) or severe loss of well clear (LoWC), while positively improving pilot involvement in the encounter resolution. | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 224,707 |
2205.11912 | Physics-Embedded Neural Networks: Graph Neural PDE Solvers with Mixed Boundary Conditions | Graph neural networks (GNNs) are a promising approach to learning and predicting physical phenomena described in boundary value problems, such as partial differential equations (PDEs) with boundary conditions. However, existing models inadequately treat boundary conditions essential for the reliable prediction of such problems. In addition, because of the locally connected nature of GNNs, it is difficult to accurately predict the state after a long time, where interaction between vertices tends to be global. We present our approach, termed physics-embedded neural networks, which considers boundary conditions and predicts the state after a long time using an implicit method. It is built based on an E(n)-equivariant GNN, resulting in high generalization performance on various shapes. We demonstrate that our model learns flow phenomena in complex shapes and outperforms a well-optimized classical solver and a state-of-the-art machine learning model in the speed-accuracy trade-off. Therefore, our model can be a useful standard for realizing reliable, fast, and accurate GNN-based PDE solvers. The code is available at https://github.com/yellowshippo/penn-neurips2022. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 298,331 |
1607.07920 | Coded Caching with Low Subpacketization Levels | Caching is a popular technique in content delivery networks that allows for reductions in transmission rates from the content-hosting server to the end users. Coded caching is a generalization of conventional caching that considers the possibility of coding in the caches and transmitting coded signals from the server. Prior results in this area demonstrate that huge reductions in transmission rates are possible, and this makes coded caching an attractive option for the next generation of content-delivery networks. However, these results require that each file hosted in the server be partitioned into a large number (i.e., the subpacketization level) of non-overlapping subfiles. From a practical perspective, this is problematic as it means that prior schemes are only applicable when the size of the files is extremely large. In this work, we propose a novel coded caching scheme that enjoys a significantly lower subpacketization level than prior schemes, while only suffering a marginal increase in the transmission rate. In particular, for a fixed cache size, the scaling with the number of users is such that the increase in transmission rate is negligible, but the decrease in subpacketization level is exponential. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 59,088 |
1208.3848 | Modelling the effect of gap junctions on tissue-level cardiac electrophysiology | When modelling tissue-level cardiac electrophysiology, continuum approximations to the discrete cell-level equations are used to maintain computational tractability. One of the most commonly used models is represented by the bidomain equations, the derivation of which relies on a homogenisation technique to construct a suitable approximation to the discrete model. This derivation does not explicitly account for the presence of gap junctions connecting one cell to another. It has been seen experimentally [Rohr, Cardiovasc. Res. 2004] that these gap junctions have a marked effect on the propagation of the action potential, specifically as the upstroke of the wave passes through the gap junction. In this paper we explicitly include gap junctions in both a 2D discrete model of cardiac electrophysiology and the corresponding continuum model, on a simplified cell geometry. Using these models we compare the results of simulations using both continuum and discrete systems. We see that the form of the action potential as it passes through gap junctions cannot be replicated using a continuum model, and that the underlying propagation speed of the action potential ceases to match up between models when gap junctions are introduced. In addition, the results of the discrete simulations match the characteristics of those shown in Rohr 2004. From this, we suggest that a hybrid model -- a discrete system following the upstroke of the action potential, and a continuum system elsewhere -- may give a more accurate description of cardiac electrophysiology. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 18,148 |
2412.16590 | Quantum $(r,\delta)$-locally recoverable codes | A classical $(r,\delta)$-locally recoverable code is an error-correcting code such that, for each coordinate $c_i$ of a codeword, there exists a set of at most $r+ \delta -1$ coordinates containing $c_i$ which allow us to correct any $\delta -1$ erasures in that set. These codes are useful for avoiding loss of information in large scale distributed and cloud storage systems. In this paper, we introduce quantum $(r,\delta)$-locally recoverable codes and give a necessary and sufficient condition for a quantum stabilizer code $Q(C)$ to be $(r,\delta)$-locally recoverable. Our condition depends only on the puncturing and shortening at suitable sets of both the symplectic self-orthogonal code $C$ used for constructing $Q(C)$ and its symplectic dual $C^{\perp_s}$. When a quantum stabilizer code comes from a Hermitian or Euclidean dual-containing code, and under an extra condition, we show that there is an equivalence between the classical and quantum concepts of $(r,\delta)$-local recoverability. A Singleton-like bound is stated in this case and examples of optimal stabilizer $(r,\delta)$-locally recoverable codes are given. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 519,599 |
2303.13867 | Few Shot Medical Image Segmentation with Cross Attention Transformer | Medical image segmentation has made significant progress in recent years. Deep learning-based methods are recognized as data-hungry techniques, requiring large amounts of data with manual annotations. However, manual annotation is expensive in the field of medical image analysis, which requires domain-specific expertise. To address this challenge, few-shot learning has the potential to learn new classes from only a few examples. In this work, we propose a novel framework for few-shot medical image segmentation, termed CAT-Net, based on a cross masked attention Transformer. Our proposed network mines the correlations between the support image and query image, limiting them to focus only on useful foreground information and boosting the representation capacity of both the support prototype and query features. We further design an iterative refinement framework that refines the query image segmentation iteratively and promotes the support feature in turn. We validated the proposed method on three public datasets: Abd-CT, Abd-MRI, and Card-MRI. Experimental results demonstrate the superior performance of our method compared to state-of-the-art methods and the effectiveness of each component. Code: https://github.com/hust-linyi/CAT-Net. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 353,866 |
2104.09011 | Few-shot Learning for Topic Modeling | Topic models have been successfully used for analyzing text documents. However, with existing topic models, many documents are required for training. In this paper, we propose a neural network-based few-shot learning method that can learn a topic model from just a few documents. The neural networks in our model take a small number of documents as inputs, and output topic model priors. The proposed method trains the neural networks such that the expected test likelihood is improved when topic model parameters are estimated by maximizing the posterior probability using the priors based on the EM algorithm. Since each step in the EM algorithm is differentiable, the proposed method can backpropagate the loss through the EM algorithm to train the neural networks. The expected test likelihood is maximized by a stochastic gradient descent method using a set of multiple text corpora with an episodic training framework. In our experiments, we demonstrate that the proposed method achieves better perplexity than existing methods using three real-world text document sets. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 231,069 |
2001.00121 | A Freeform Dielectric Metasurface Modeling Approach Based on Deep Neural Networks | Metasurfaces have shown promising potential in shaping optical wavefronts while remaining compact compared to bulky geometric optics devices. Design of meta-atoms, the fundamental building blocks of metasurfaces, relies on trial-and-error methods to achieve target electromagnetic responses. This process includes the characterization of an enormous number of different meta-atom designs with different physical and geometric parameters, which normally demands huge computational resources. In this paper, a deep learning-based metasurface/meta-atom modeling approach is introduced to significantly reduce the characterization time while maintaining accuracy. Based on a convolutional neural network (CNN) structure, the proposed deep learning network is able to model meta-atoms with free-form 2D patterns and different lattice sizes, material refractive indexes and thicknesses. Moreover, the presented approach features the capability to predict meta-atoms' wide-spectrum responses on the timescale of milliseconds, which makes it attractive for applications such as fast meta-atom/metasurface on-demand designs and optimizations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 159,137 |
2303.15087 | Prediction of Time and Distance of Trips Using Explainable Attention-based LSTMs | In this paper, we propose machine learning solutions to predict the time of future trips and the possible distance the vehicle will travel. For this prediction task, we develop and investigate four methods. In the first method, we use long short-term memory (LSTM)-based structures specifically designed to handle multi-dimensional historical data of trip time and distances simultaneously. Using it, we predict the future trip time and forecast the distance a vehicle will travel by concatenating the outputs of LSTM networks through fully connected layers. The second method uses attention-based LSTM networks (At-LSTM) to perform the same tasks. The third method utilizes two LSTM networks in parallel, one for forecasting the time of the trip and the other for predicting the distance. The output of each LSTM is then concatenated through fully connected layers. Finally, the last model is based on two parallel At-LSTMs, where similarly, each At-LSTM predicts time and distance separately through fully connected layers. Among the proposed methods, the most advanced one, i.e., parallel At-LSTM, predicts the next trip's distance and time with a 3.99% error margin, 23.89% better than the LSTM of the first method. We also propose TimeSHAP as an explainability method for understanding how the networks perform learning and model the sequence of information. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 354,356 |
2012.12137 | Projected Stochastic Gradient Langevin Algorithms for Constrained Sampling and Non-Convex Learning | Langevin algorithms are gradient descent methods with additive noise. They have been used for decades in Markov chain Monte Carlo (MCMC) sampling, optimization, and learning. Their convergence properties for unconstrained non-convex optimization and learning problems have been studied widely in the last few years. Other work has examined projected Langevin algorithms for sampling from log-concave distributions restricted to convex compact sets. For learning and optimization, log-concave distributions correspond to convex losses. In this paper, we analyze the case of non-convex losses with compact convex constraint sets and IID external data variables. We term the resulting method the projected stochastic gradient Langevin algorithm (PSGLA). We show the algorithm achieves a deviation of $O(T^{-1/4}(\log T)^{1/2})$ from its target distribution in 1-Wasserstein distance. For optimization and learning, we show that the algorithm achieves $\epsilon$-suboptimal solutions, on average, provided that it is run for a time that is polynomial in $\epsilon^{-1}$ and slightly super-exponential in the problem dimension. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 212,841 |
1208.1056 | Sequential Estimation Methods from Inclusion Principle | In this paper, we propose new sequential estimation methods based on inclusion principle. The main idea is to reformulate the estimation problems as constructing sequential random intervals and use confidence sequences to control the associated coverage probabilities. In contrast to existing asymptotic sequential methods, our estimation procedures rigorously guarantee the pre-specified levels of confidence. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 17,949 |
2410.03577 | Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal Large Language Models | Despite their impressive capabilities, Multimodal Large Language Models (MLLMs) are susceptible to hallucinations, especially assertively fabricating content not present in the visual inputs. To address the aforementioned challenge, we follow a common cognitive process - when one's initial memory of critical on-sight details fades, it is intuitive to look at them a second time to seek a factual and accurate answer. Therefore, we introduce Memory-space Visual Retracing (MemVR), a novel hallucination mitigation paradigm that requires neither external knowledge retrieval nor additional fine-tuning. In particular, we treat visual prompts as supplementary evidence to be reinjected into MLLMs via the Feed Forward Network (FFN) as key-value memory, when the model is uncertain or even amnesic about question-relevant visual memories. Comprehensive experimental evaluations demonstrate that MemVR significantly mitigates hallucination issues across various MLLMs and excels in general benchmarks without incurring added time overhead, thus emphasizing its potential for widespread applicability. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 494,859 |
2204.02219 | SNUG: Self-Supervised Neural Dynamic Garments | We present a self-supervised method to learn dynamic 3D deformations of garments worn by parametric human bodies. State-of-the-art data-driven approaches to model 3D garment deformations are trained using supervised strategies that require large datasets, usually obtained by expensive physics-based simulation methods or professional multi-camera capture setups. In contrast, we propose a new training scheme that removes the need for ground-truth samples, enabling self-supervised training of dynamic 3D garment deformations. Our key contribution is to realize that physics-based deformation models, traditionally solved in a frame-by-frame basis by implicit integrators, can be recast as an optimization problem. We leverage such optimization-based scheme to formulate a set of physics-based loss terms that can be used to train neural networks without precomputing ground-truth data. This allows us to learn models for interactive garments, including dynamic deformations and fine wrinkles, with a two-orders-of-magnitude speed-up in training time compared to state-of-the-art supervised methods. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 289,863 |
2205.00267 | Probing Cross-Lingual Lexical Knowledge from Multilingual Sentence Encoders | Pretrained multilingual language models (LMs) can be successfully transformed into multilingual sentence encoders (SEs; e.g., LaBSE, xMPNet) via additional fine-tuning or model distillation with parallel data. However, it remains unclear how to best leverage them to represent sub-sentence lexical items (i.e., words and phrases) in cross-lingual lexical tasks. In this work, we probe SEs for the amount of cross-lingual lexical knowledge stored in their parameters, and compare them against the original multilingual LMs. We also devise a simple yet efficient method for exposing the cross-lingual lexical knowledge by means of additional fine-tuning through inexpensive contrastive learning that requires only a small amount of word translation pairs. Using bilingual lexical induction (BLI), cross-lingual lexical semantic similarity, and cross-lingual entity linking as lexical probing tasks, we report substantial gains on standard benchmarks (e.g., +10 Precision@1 points in BLI). The results indicate that the SEs such as LaBSE can be 'rewired' into effective cross-lingual lexical encoders via the contrastive learning procedure, and that they contain more cross-lingual lexical knowledge than what 'meets the eye' when they are used as off-the-shelf SEs. This way, we also provide an effective tool for harnessing 'covert' multilingual lexical knowledge hidden in multilingual sentence encoders. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 294,190 |
1905.12782 | MaxiMin Active Learning in Overparameterized Model Classes | Generating labeled training datasets has become a major bottleneck in Machine Learning (ML) pipelines. Active ML aims to address this issue by designing learning algorithms that automatically and adaptively select the most informative examples for labeling so that human time is not wasted labeling irrelevant, redundant, or trivial examples. This paper proposes a new approach to active ML with nonparametric or overparameterized models such as kernel methods and neural networks. In the context of binary classification, the new approach is shown to possess a variety of desirable properties that allow active learning algorithms to automatically and efficiently identify decision boundaries and data clusters. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 132,874 |
1604.02426 | CNN Image Retrieval Learns from BoW: Unsupervised Fine-Tuning with Hard Examples | Convolutional Neural Networks (CNNs) achieve state-of-the-art performance in many computer vision tasks. However, this achievement is preceded by extreme manual annotation in order to perform either training from scratch or fine-tuning for the target task. In this work, we propose to fine-tune CNN for image retrieval from a large collection of unordered images in a fully automated manner. We employ state-of-the-art retrieval and Structure-from-Motion (SfM) methods to obtain 3D models, which are used to guide the selection of the training data for CNN fine-tuning. We show that both hard positive and hard negative examples enhance the final performance in particular object retrieval with compact codes. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 54,331 |
2105.06756 | Long Short-term Memory RNN | This paper is based on a machine learning project at the Norwegian University of Science and Technology, fall 2020. The project was initiated with a literature review on the latest developments within time-series forecasting methods in the scientific community over the past five years. The paper summarizes the essential aspects of this research. Furthermore, in this paper, we introduce an LSTM cell's architecture, and explain how different components go together to alter the cell's memory and predict the output. Also, the paper provides the necessary formulas and foundations to calculate a forward iteration through an LSTM. Then, the paper refers to some practical applications and research that emphasize the strengths and weaknesses of LSTMs, shown within the time-series domain and the natural language processing (NLP) domain. Finally, alternative statistical methods for time series predictions are highlighted, where the paper outlines ARIMA and exponential smoothing. Nevertheless, as LSTMs can be viewed as a complex architecture, the paper assumes that the reader has some knowledge of essential machine learning aspects, such as the multi-layer perceptron, activation functions, backpropagation, bias, over- and underfitting, and more. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 235,223 |
1712.04567 | Practical Bayesian optimization in the presence of outliers | Inference in the presence of outliers is an important field of research as outliers are ubiquitous and may arise across a variety of problems and domains. Bayesian optimization is a method that heavily relies on probabilistic inference. This allows outstanding sample efficiency because the probabilistic machinery provides a memory of the whole optimization process. However, that virtue becomes a disadvantage when the memory is populated with outliers, inducing bias in the estimation. In this paper, we present an empirical evaluation of Bayesian optimization methods in the presence of outliers. The empirical evidence shows that Bayesian optimization with robust regression often produces suboptimal results. We then propose a new algorithm which combines robust regression (a Gaussian process with Student-t likelihood) with outlier diagnostics to classify data points as outliers or inliers. By using a scheduler for the classification of outliers, our method is more efficient and has better convergence than standard robust regression. Furthermore, we show that even in controlled situations with no expected outliers, our method is able to produce better results. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 86,620 |
2301.06774 | Temporal Dynamics of Coordinated Online Behavior: Stability, Archetypes,
and Influence | Large-scale online campaigns, malicious or otherwise, require a significant degree of coordination among participants, which sparked interest in the study of coordinated online behavior. State-of-the-art methods for detecting coordinated behavior perform static analyses, disregarding the temporal dynamics of coordination. Here, we carry out the first dynamic analysis of coordinated behavior. To reach our goal we build a multiplex temporal network and we perform dynamic community detection to identify groups of users that exhibited coordinated behaviors in time. Thanks to our novel approach we find that: (i) coordinated communities feature variable degrees of temporal instability; (ii) dynamic analyses are needed to account for such instability, and results of static analyses can be unreliable and scarcely representative of unstable communities; (iii) some users exhibit distinct archetypal behaviors that have important practical implications; (iv) content and network characteristics contribute to explaining why users leave and join coordinated communities. Our results demonstrate the advantages of dynamic analyses and open up new directions of research on the unfolding of online debates, on the strategies of coordinated communities, and on the patterns of online influence. | false | false | false | true | false | false | false | false | true | false | false | false | false | true | false | false | false | false | 340,735 |
1404.3706 | Using industrial robot to manipulate the measured object in CMM | Coordinate measuring machines (CMMs) are widely used to check dimensions of manufactured parts, especially in automotive industry. The major obstacles in automation of these measurements are fixturing and clamping assemblies, which are required in order to position the measured object within the CMM. This paper describes how an industrial robot can be used to manipulate the measured object within the CMM work space, in order to enable automation of complex geometry measurement. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 32,337 |
1707.06357 | Improving Discourse Relation Projection to Build Discourse Annotated
Corpora | The naive approach to annotation projection is not effective to project discourse annotations from one language to another because implicit discourse relations are often changed to explicit ones and vice-versa in the translation. In this paper, we propose a novel approach based on the intersection between statistical word-alignment models to identify unsupported discourse annotations. This approach identified 65% of the unsupported annotations in the English-French parallel sentences from Europarl. By filtering out these unsupported annotations, we induced the first PDTB-style discourse annotated corpus for French from Europarl. We then used this corpus to train a classifier to identify the discourse-usage of French discourse connectives and show a 15% improvement of F1-score compared to the classifier trained on the non-filtered annotations. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 77,409 |
2206.03429 | Generating Long Videos of Dynamic Scenes | We present a video generation model that accurately reproduces object motion, changes in camera viewpoint, and new content that arises over time. Existing video generation methods often fail to produce new content as a function of time while maintaining consistencies expected in real environments, such as plausible dynamics and object persistence. A common failure case is for content to never change due to over-reliance on inductive biases to provide temporal consistency, such as a single latent code that dictates content for the entire video. On the other extreme, without long-term consistency, generated videos may morph unrealistically between different scenes. To address these limitations, we prioritize the time axis by redesigning the temporal latent representation and learning long-term consistency from data by training on longer videos. To this end, we leverage a two-phase training strategy, where we separately train using longer videos at a low resolution and shorter videos at a high resolution. To evaluate the capabilities of our model, we introduce two new benchmark datasets with explicit focus on long-term temporal dynamics. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | true | false | false | 301,281 |
2307.05735 | Effective Latent Differential Equation Models via Attention and Multiple
Shooting | Scientific Machine Learning (SciML) is a burgeoning field that synergistically combines domain-aware and interpretable models with agnostic machine learning techniques. In this work, we introduce GOKU-UI, an evolution of the SciML generative model GOKU-nets. GOKU-UI not only broadens the original model's spectrum to incorporate other classes of differential equations, such as Stochastic Differential Equations (SDEs), but also integrates attention mechanisms and a novel multiple shooting training strategy in the latent space. These modifications have led to a significant increase in its performance in both reconstruction and forecast tasks, as demonstrated by our evaluation of simulated and empirical data. Specifically, GOKU-UI outperformed all baseline models on synthetic datasets even with a training set 16-fold smaller, underscoring its remarkable data efficiency. Furthermore, when applied to empirical human brain data, while incorporating stochastic Stuart-Landau oscillators into its dynamical core, our proposed enhancements markedly increased the model's effectiveness in capturing complex brain dynamics. This augmented version not only surpassed all baseline methods in the reconstruction task, but also demonstrated lower prediction error of future brain activity up to 15 seconds ahead. By training GOKU-UI on resting state fMRI data, we encoded whole-brain dynamics into a latent representation, learning a low-dimensional dynamical system model that could offer insights into brain functionality and open avenues for practical applications such as the classification of mental states or psychiatric conditions. Ultimately, our research provides further impetus for the field of Scientific Machine Learning, showcasing the potential for advancements when established scientific insights are interwoven with modern machine learning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 378,839 |
2406.04344 | Verbalized Machine Learning: Revisiting Machine Learning with Language
Models | Motivated by the progress made by large language models (LLMs), we introduce the framework of verbalized machine learning (VML). In contrast to conventional machine learning (ML) models that are typically optimized over a continuous parameter space, VML constrains the parameter space to be human-interpretable natural language. Such a constraint leads to a new perspective of function approximation, where an LLM with a text prompt can be viewed as a function parameterized by the text prompt. Guided by this perspective, we revisit classical ML problems, such as regression and classification, and find that these problems can be solved by an LLM-parameterized learner and optimizer. The major advantages of VML include (1) easy encoding of inductive bias: prior knowledge about the problem and hypothesis class can be encoded in natural language and fed into the LLM-parameterized learner; (2) automatic model class selection: the optimizer can automatically select a model class based on data and verbalized prior knowledge, and it can update the model class during training; and (3) interpretable learner updates: the LLM-parameterized optimizer can provide explanations for why an update is performed. We empirically verify the effectiveness of VML, and hope that VML can serve as a stepping stone to stronger interpretability. | false | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | 461,646 |
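A skeleton of the VML framing under stated assumptions: `call_llm` is a hypothetical stand-in for any LLM client, and both the "model" (a natural-language parameter string) and the "optimizer" are LLM calls. The prompts are illustrative only, not the paper's templates:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; replace with a real client."""
    raise NotImplementedError

def predict(theta: str, x: str) -> str:
    # the "model" is the LLM conditioned on natural-language parameters theta
    return call_llm(f"{theta}\nInput: {x}\nOutput:")

def optimizer_step(theta: str, batch: list[tuple[str, str]]) -> str:
    # the "optimizer" is also an LLM: it reads mistakes and rewrites theta,
    # which makes each parameter update human-readable
    errors = "\n".join(f"x={x} expected={y} got={predict(theta, x)}"
                       for x, y in batch)
    return call_llm(
        "You are an optimizer. Given the current model description and its "
        f"mistakes, write an improved description.\nModel: {theta}\n"
        f"Mistakes:\n{errors}\nImproved model:")
```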
cs/0306124 | Updating Probabilities | As examples such as the Monty Hall puzzle show, applying conditioning to update a probability distribution on a ``naive space'', which does not take into account the protocol used, can often lead to counterintuitive results. Here we examine why. A criterion known as CAR (``coarsening at random'') in the statistical literature characterizes when ``naive'' conditioning in a naive space works. We show that the CAR condition holds rather infrequently, and we provide a procedural characterization of it, by giving a randomized algorithm that generates all and only distributions for which CAR holds. This substantially extends previous characterizations of CAR. We also consider more generalized notions of update such as Jeffrey conditioning and minimizing relative entropy (MRE). We give a generalization of the CAR condition that characterizes when Jeffrey conditioning leads to appropriate answers, and show that there exist some very simple settings in which MRE essentially never gives the right results. This generalizes and interconnects previous results obtained in the literature on CAR and MRE. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 537,899 |
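A worked example of the paper's starting observation: enumerating the Monty Hall protocol with exact fractions gives the 1/3-versus-2/3 answer, while naive conditioning on the event "the car is not behind door 2" gives 1/2:

```python
from fractions import Fraction

# Protocol-aware enumeration: the car is behind door 0, 1 or 2; the player
# picks door 0; the host opens a goat door different from the pick.
outcomes = {}  # (car, opened) -> probability
for car in range(3):
    options = [d for d in range(3) if d != 0 and d != car]
    for opened in options:
        outcomes[(car, opened)] = outcomes.get((car, opened), 0) \
            + Fraction(1, 3) * Fraction(1, len(options))

# Condition on the host opening door 2 under the actual protocol:
seen = {k: p for k, p in outcomes.items() if k[1] == 2}
total = sum(seen.values())
print(seen[(0, 2)] / total)   # stay:   1/3
print(seen[(1, 2)] / total)   # switch: 2/3

# Naive conditioning on the "naive space" {car != 2} ignores the protocol
# and wrongly assigns 1/2 to the original pick:
print(Fraction(1, 3) / Fraction(2, 3))  # 1/2
```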
1604.02610 | Network Topology Identification from Spectral Templates | Network topology inference is a cornerstone problem in statistical analyses of complex systems. In this context, the fresh look advocated here permeates benefits from convex optimization and graph signal processing, to identify the so-termed graph shift operator (encoding the network topology) given only the eigenvectors of the shift. These spectral templates can be obtained, for example, from principal component analysis of a set of graph signals defined on the particular network. The novel idea is to find a graph shift that, while being consistent with the provided spectral information, endows the network structure with certain desired properties such as sparsity. The focus is on developing efficient recovery algorithms along with identifiability conditions for two particular shifts, the adjacency matrix and the normalized graph Laplacian. Application domains include network topology identification from steady-state signals generated by a diffusion process, and design of a graph filter that facilitates the distributed implementation of a prescribed linear network operator. Numerical tests showcase the effectiveness of the proposed algorithms in recovering synthetic and structural brain networks. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 54,358
2205.00042 | Answer Consolidation: Formulation and Benchmarking | Current question answering (QA) systems primarily consider the single-answer scenario, where each question is assumed to be paired with one correct answer. However, in many real-world QA applications, multiple answer scenarios arise where consolidating answers into a comprehensive and non-redundant set of answers is a more efficient user interface. In this paper, we formulate the problem of answer consolidation, where answers are partitioned into multiple groups, each representing different aspects of the answer set. Then, given this partitioning, a comprehensive and non-redundant set of answers can be constructed by picking one answer from each group. To initiate research on answer consolidation, we construct a dataset consisting of 4,699 questions and 24,006 sentences and evaluate multiple models. Despite a promising performance achieved by the best-performing supervised models, we still believe this task has room for further improvements. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 294,116 |
2310.17404 | Invariance Measures for Neural Networks | Invariances in neural networks are useful and necessary for many tasks. However, the representation of the invariance of most neural network models has not been characterized. We propose measures to quantify the invariance of neural networks in terms of their internal representation. The measures are efficient and interpretable, and can be applied to any neural network model. They are also more sensitive to invariance than previously defined measures. We validate the measures and their properties in the domain of affine transformations and the CIFAR10 and MNIST datasets, including their stability and interpretability. Using the measures, we perform a first analysis of CNN models and show that their internal invariance is remarkably stable to random weight initializations, but not to changes in dataset or transformation. We believe the measures will enable new avenues of research in invariance representation. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | false | false | 403,120 |
2110.02370 | Leveraging the Inductive Bias of Large Language Models for Abstract
Textual Reasoning | Large natural language models (such as GPT-3 or T5) demonstrate impressive abilities across a range of general NLP tasks. Here, we show that the knowledge embedded in such models provides a useful inductive bias, not just on traditional NLP tasks, but also in the nontraditional task of training a symbolic reasoning engine. We observe that these engines learn quickly and generalize in a natural way that reflects human intuition. For example, training such a system to model block-stacking might naturally generalize to stacking other types of objects because of structure in the real world that has been partially captured by the language describing it. We study several abstract textual reasoning tasks, such as object manipulation and navigation, and demonstrate multiple types of generalization to novel scenarios and the symbols that comprise them. We also demonstrate the surprising utility of \textit{compositional learning}, where a learner dedicated to mastering a complicated task gains an advantage by training on relevant simpler tasks instead of jumping straight to the complicated task. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 259,090 |
1811.05013 | Blindfold Baselines for Embodied QA | We explore blindfold (question-only) baselines for Embodied Question Answering. The EmbodiedQA task requires an agent to answer a question by intelligently navigating in a simulated environment, gathering necessary visual information only through first-person vision before finally answering. Consequently, a blindfold baseline which ignores the environment and visual information is a degenerate solution, yet we show through our experiments on the EQAv1 dataset that a simple question-only baseline achieves state-of-the-art results on the EmbodiedQA task in all cases except when the agent is spawned extremely close to the object. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | false | 113,218 |
2405.11471 | CMA-ES with Adaptive Reevaluation for Multiplicative Noise | The covariance matrix adaptation evolution strategy (CMA-ES) is a powerful optimization method for continuous black-box optimization problems. Several noise-handling methods have been proposed to bring out the optimization performance of the CMA-ES on noisy objective functions. The adaptations of the population size and the learning rate are two major approaches that perform well under additive Gaussian noise. The reevaluation technique is another approach that evaluates each solution multiple times. In this paper, we discuss the difference between those methods from the perspective of stochastic relaxation that considers the maximization of the expected utility function. We derive that the set of maximizers of the noise-independent utility, which is used in the reevaluation technique, certainly contains the optimal solution, while the noise-dependent utility, which is used in the population size and learning rate adaptations, does not satisfy it under multiplicative noise. Based on the discussion, we develop the reevaluation adaptation CMA-ES (RA-CMA-ES), which computes two update directions using half of the evaluations and adapts the number of reevaluations based on the estimated correlation of those two update directions. The numerical simulation shows that the RA-CMA-ES outperforms the comparative method under multiplicative noise, maintaining competitive performance under additive noise. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 455,156
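A toy demonstration (not the RA-CMA-ES update itself) of why reevaluation matters under multiplicative noise: single noisy evaluations corrupt the candidate ranking that a rank-based method like CMA-ES relies on, while averaging repeated evaluations restores it:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sum(x**2, axis=-1)            # true objective (minimize)
X = rng.normal(size=(6, 2))                     # candidate solutions

def noisy_eval(X, k):
    # multiplicative noise: observed = f(x) * (1 + sigma * N(0, 1)), k reevals
    vals = f(X)[None, :] * (1 + 1.5 * rng.normal(size=(k, len(X))))
    return vals.mean(axis=0)

print("1 eval :", np.argsort(noisy_eval(X, 1)))    # ranking often corrupted
print("50 eval:", np.argsort(noisy_eval(X, 50)))   # averaging restores it
print("true   :", np.argsort(f(X)))
```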
2408.04815 | Towards improving Alzheimer's intervention: a machine learning approach
for biomarker detection through combining MEG and MRI pipelines | MEG is a non-invasive neuroimaging technique with excellent temporal and spatial resolution, crucial for studying brain function in dementia and Alzheimer's Disease. It identifies changes in brain activity at various Alzheimer's stages, including preclinical and prodromal phases. MEG may detect pathological changes before clinical symptoms, offering potential biomarkers for intervention. This study evaluates classification techniques using MEG features to distinguish between healthy controls and mild cognitive impairment participants from the BioFIND study. We compare MEG-based biomarkers with MRI-based anatomical features, both independently and combined. We used 3 Tesla MRI and MEG data from 324 BioFIND participants; 158 MCI and 166 HC. Analyses were performed using MATLAB with SPM12 and OSL toolboxes. Machine learning analyses, including 100 Monte Carlo replications of 10-fold cross-validation, were conducted on sensor and source spaces. Combining MRI with MEG features achieved the best performance: 0.76 accuracy and AUC of 0.82 for GLMNET using LCMV source-based MEG. MEG-only analyses using LCMV and eLORETA also performed well, suggesting that combining uncorrected MEG with z-score-corrected MRI features is optimal. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 479,536
cs/0507044 | Defensive Universal Learning with Experts | This paper shows how universal learning can be achieved with expert advice. To this aim, we specify an experts algorithm with the following characteristics: (a) it uses only feedback from the actions actually chosen (bandit setup), (b) it can be applied with countably infinite expert classes, and (c) it copes with losses that may grow in time appropriately slowly. We prove loss bounds against an adaptive adversary. From this, we obtain a master algorithm for "reactive" experts problems, which means that the master's actions may influence the behavior of the adversary. Our algorithm can significantly outperform standard experts algorithms on such problems. Finally, we combine it with a universal expert class. The resulting universal learner performs -- in a certain sense -- almost as well as any computable strategy, for any online decision problem. We also specify the (worst-case) convergence speed, which is very slow. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 538,832 |
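For intuition, characteristic (a) — learning from bandit feedback over a set of experts — can be illustrated with a standard Exp3-style multiplicative-weights sketch over finitely many experts; the paper's algorithm additionally handles countably infinite classes, slowly growing losses, and reactive adversaries, none of which this sketch covers:

```python
import numpy as np

def exp3(loss_matrix, eta=0.1, gamma=0.1, seed=0):
    """Exp3: multiplicative weights on importance-weighted bandit losses."""
    rng = np.random.default_rng(seed)
    T, K = loss_matrix.shape
    w = np.ones(K)
    total = 0.0
    for t in range(T):
        p = (1 - gamma) * w / w.sum() + gamma / K   # mix in exploration
        k = rng.choice(K, p=p)
        loss = loss_matrix[t, k]       # feedback only for the chosen expert
        total += loss
        est = np.zeros(K)
        est[k] = loss / p[k]           # unbiased estimate of the loss vector
        w *= np.exp(-eta * est)
    return total

losses = np.random.default_rng(1).uniform(size=(1000, 5))
losses[:, 2] *= 0.3                    # expert 2 is best
print(exp3(losses))                    # cumulative loss tracks expert 2's
```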
1905.13126 | Automatic Evaluation of Local Topic Quality | Topic models are typically evaluated with respect to the global topic distributions that they generate, using metrics such as coherence, but without regard to local (token-level) topic assignments. Token-level assignments are important for downstream tasks such as classification. Even recent models, which aim to improve the quality of these token-level topic assignments, have been evaluated only with respect to global metrics. We propose a task designed to elicit human judgments of token-level topic assignments. We use a variety of topic model types and parameters and discover that global metrics agree poorly with human assignments. Since human evaluation is expensive we propose a variety of automated metrics to evaluate topic models at a local level. Finally, we correlate our proposed metrics with human judgments from the task on several datasets. We show that an evaluation based on the percent of topic switches correlates most strongly with human judgment of local topic quality. We suggest that this new metric, which we call consistency, be adopted alongside global metrics such as topic coherence when evaluating new topic models. | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | 132,995 |
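A plausible reading of the proposed consistency metric, assuming it is computed per document as the fraction of adjacent tokens whose topic assignment does not switch (the paper's exact normalization may differ):

```python
def consistency(token_topics):
    """Fraction of adjacent token pairs whose topic assignment agrees."""
    pairs = list(zip(token_topics, token_topics[1:]))
    if not pairs:
        return 1.0
    return sum(a == b for a, b in pairs) / len(pairs)

# 3 of the 5 adjacent pairs keep the same topic -> consistency 0.6
print(consistency([3, 3, 3, 7, 7, 3]))
```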
2203.12314 | Wider or Deeper Neural Network Architecture for Acoustic Scene
Classification with Mismatched Recording Devices | In this paper, we present a robust and low complexity system for Acoustic Scene Classification (ASC), the task of identifying the scene of an audio recording. We first construct an ASC baseline system in which a novel inception-residual-based network architecture is proposed to deal with the mismatched recording device issue. To further improve the performance while keeping the model complexity low, we apply two techniques: ensemble of multiple spectrograms and channel reduction on the ASC baseline system. By conducting extensive experiments on the benchmark DCASE 2020 Task 1A Development dataset, our best model achieves an accuracy of 69.9% and a low complexity of 2.4M trainable parameters, which is competitive with state-of-the-art ASC systems and promising for real-life applications on edge devices. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 287,226
2202.05839 | Abstraction for Deep Reinforcement Learning | We characterise the problem of abstraction in the context of deep reinforcement learning. Various well established approaches to analogical reasoning and associative memory might be brought to bear on this issue, but they present difficulties because of the need for end-to-end differentiability. We review developments in AI and machine learning that could facilitate their adoption. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 280,010 |
2302.11208 | KS-DETR: Knowledge Sharing in Attention Learning for Detection
Transformer | Scaled dot-product attention applies a softmax function on the scaled dot-product of queries and keys to calculate weights and then multiplies the weights and values. In this work, we study how to improve the learning of scaled dot-product attention to improve the accuracy of DETR. Our method is based on the following observations: using ground truth foreground-background mask (GT Fg-Bg Mask) as additional cues in the weights/values learning enables learning much better weights/values; with better weights/values, better values/weights can be learned. We propose a triple-attention module in which the first attention is a plain scaled dot-product attention, the second/third attention generates high-quality weights/values (with the assistance of GT Fg-Bg Mask) and shares the values/weights with the first attention to improve the quality of values/weights. The second and third attentions are removed during inference. We call our method knowledge-sharing DETR (KS-DETR), which is an extension of knowledge distillation (KD) in the way that the improved weights and values of the teachers (the second and third attentions) are directly shared, instead of mimicked, by the student (the first attention) to enable more efficient knowledge transfer from the teachers to the student. Experiments on various DETR-like methods show consistent improvements over the baseline methods on the MS COCO benchmark. Code is available at https://github.com/edocanonymous/KS-DETR. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 347,128 |
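The plain scaled dot-product attention that forms KS-DETR's first branch is standard and sketched below in NumPy; how the GT Fg-Bg Mask enters the second and third attentions is specific to the paper and only hinted at via the optional mask argument:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """weights = softmax(Q K^T / sqrt(d)); output = weights @ V."""
    d = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d)
    if mask is not None:                 # e.g., a foreground/background cue
        scores = np.where(mask, scores, -1e9)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # stable softmax
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V, w

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape, w.sum(axis=-1))   # (4, 8); each weight row sums to 1
```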
2309.10916 | What Learned Representations and Influence Functions Can Tell Us About
Adversarial Examples | Adversarial examples, deliberately crafted using small perturbations to fool deep neural networks, were first studied in image processing and more recently in NLP. While approaches to detecting adversarial examples in NLP have largely relied on search over input perturbations, image processing has seen a range of techniques that aim to characterise adversarial subspaces over the learned representations. In this paper, we adapt two such approaches to NLP, one based on nearest neighbors and influence functions and one on Mahalanobis distances. The former in particular produces a state-of-the-art detector when compared against several strong baselines; moreover, the novel use of influence functions provides insight into how the nature of adversarial example subspaces in NLP relate to those in image processing, and also how they differ depending on the kind of NLP task. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 393,200 |
2410.10286 | Back-of-the-Book Index Automation for Arabic Documents | Back-of-the-book indexes are crucial for book readability. Their manual creation is laborious and error-prone. In this paper, we consider automating back-of-the-book index extraction for Arabic books to help simplify both the creation and review tasks. Given a back-of-the-book index, we aim to check and identify the accurate occurrences of index terms relative to the associated pages. To achieve this, we first define a pool of candidates for each term by extracting all possible noun phrases from paragraphs appearing on the relevant index pages. These noun phrases, identified through part-of-speech analysis, are stored in a vector database for efficient retrieval. We use several metrics, including exact matches, lexical similarity, and semantic similarity, to determine the most appropriate occurrence. The candidate with the highest score based on these metrics is chosen as the occurrence of the term. We fine-tuned a heuristic method that considers the above metrics and that achieves an F1-score of .966 (precision=.966, recall=.966). These excellent results open the door for future work related to automation of back-of-the-book index generation and checking. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 497,998
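A minimal sketch of scoring candidate noun phrases against an index term; the exact-match/lexical/semantic combination follows the abstract, but the weights and the use of difflib for lexical similarity are illustrative assumptions, not the paper's tuned heuristic:

```python
from difflib import SequenceMatcher

def score_candidate(term, candidate, semantic_sim=0.0):
    """Combine exact match, lexical similarity, and an externally supplied
    semantic similarity into one score; the weights are illustrative."""
    exact = 1.0 if term == candidate else 0.0
    lexical = SequenceMatcher(None, term, candidate).ratio()
    return 0.5 * exact + 0.3 * lexical + 0.2 * semantic_sim

def best_occurrence(term, noun_phrases):
    # pick the candidate noun phrase with the highest combined score
    return max(noun_phrases, key=lambda np_: score_candidate(term, np_))

print(best_occurrence("graph theory", ["graph theory", "graphing", "set theory"]))
```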
1901.01325 | Optimal Decision-Making in Mixed-Agent Partially Observable Stochastic
Environments via Reinforcement Learning | Optimal decision making with limited or no information in stochastic environments where multiple agents interact is a challenging topic in the realm of artificial intelligence. Reinforcement learning (RL) is a popular approach for arriving at optimal strategies by predicating stimuli, such as the reward for following a strategy, on experience. RL is heavily explored in the single-agent context, but is a nascent concept in multiagent problems. To this end, I propose several principled model-free and partially model-based reinforcement learning approaches for several multiagent settings. In the realm of normative reinforcement learning, I introduce scalable extensions to Monte Carlo exploring starts for partially observable Markov Decision Processes (POMDP), dubbed MCES-P, where I expand the theory and algorithm to the multiagent setting. I first examine MCES-P with probably approximately correct (PAC) bounds in the context of multiagent setting, showing MCESP+PAC holds in the presence of other agents. I then propose a more sample-efficient methodology for antagonistic settings, MCESIP+PAC. For cooperative settings, I extend MCES-P to the Multiagent POMDP, dubbed MCESMP+PAC. I then explore the use of reinforcement learning as a methodology in searching for optima in realistic and latent model environments. First, I explore a parameterized Q-learning approach in modeling humans learning to reason in an uncertain, multiagent environment. Next, I propose an implementation of MCES-P, along with image segmentation, to create an adaptive team-based reinforcement learning technique to positively identify the presence of phenotypically-expressed water and pathogen stress in crop fields. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 117,945 |
2203.08656 | Learning Representation for Bayesian Optimization with Collision-free
Regularization | Bayesian optimization has been challenged by datasets with large-scale, high-dimensional, and non-stationary characteristics, which are common in real-world scenarios. Recent works attempt to handle such input by applying neural networks ahead of the classical Gaussian process to learn a latent representation. We show that even with proper network design, such learned representation often leads to collision in the latent space: two points with significantly different observations collide in the learned latent space, leading to degraded optimization performance. To address this issue, we propose LOCo, an efficient deep Bayesian optimization framework which employs a novel regularizer to reduce the collision in the learned latent space and encourage the mapping from the latent space to the objective value to be Lipschitz continuous. LOCo takes in pairs of data points and penalizes those too close in the latent space compared to their target space distance. We provide a rigorous theoretical justification for LOCo by inspecting the regret of this dynamic-embedding-based Bayesian optimization algorithm, where the neural network is iteratively retrained with the regularizer. Our empirical results demonstrate the effectiveness of LOCo on several synthetic and real-world benchmark Bayesian optimization tasks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 285,873 |
1703.00122 | RGB-D Salient Object Detection Based on Discriminative Cross-modal
Transfer Learning | In this work, we propose to utilize Convolutional Neural Networks to boost the performance of depth-induced salient object detection by capturing the high-level representative features for depth modality. We formulate the depth-induced saliency detection as a CNN-based cross-modal transfer problem to bridge the gap between the "data-hungry" nature of CNNs and the unavailability of sufficient labeled training data in depth modality. In the proposed approach, we leverage the auxiliary data from the source modality effectively by training the RGB saliency detection network to obtain the task-specific pre-understanding layers for the target modality. Meanwhile, we exploit the depth-specific information by pre-training a modality classification network that encourages modal-specific representations during the optimizing course. Thus, it could make the feature representations of the RGB and depth modalities as discriminative as possible. These two modules are pre-trained independently and then stitched to initialize and optimize the eventual depth-induced saliency detection model. Experiments demonstrate the effectiveness of the proposed novel pre-training strategy as well as the significant and consistent improvements of the proposed approach over other state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 69,112 |
2111.13154 | Country-wide Retrieval of Forest Structure From Optical and SAR
Satellite Imagery With Deep Ensembles | Monitoring and managing Earth's forests in an informed manner is an important requirement for addressing challenges like biodiversity loss and climate change. While traditional in situ or aerial campaigns for forest assessments provide accurate data for analysis at regional level, scaling them to entire countries and beyond with high temporal resolution is hardly possible. In this work, we propose a method based on deep ensembles that densely estimates forest structure variables at country-scale with 10-meter resolution, using freely available satellite imagery as input. Our method jointly transforms Sentinel-2 optical images and Sentinel-1 synthetic-aperture radar images into maps of five different forest structure variables: 95th height percentile, mean height, density, Gini coefficient, and fractional cover. We train and test our model on reference data from 41 airborne laser scanning missions across Norway and demonstrate that it is able to generalize to unseen test regions, achieving normalized mean absolute errors between 11% and 15%, depending on the variable. Our work is also the first to propose a variant of so-called Bayesian deep learning to densely predict multiple forest structure variables with well-calibrated uncertainty estimates from satellite imagery. The uncertainty information increases the trustworthiness of the model and its suitability for downstream tasks that require reliable confidence estimates as a basis for decision making. We present an extensive set of experiments to validate the accuracy of the predicted maps as well as the quality of the predicted uncertainties. To demonstrate scalability, we provide Norway-wide maps for the five forest structure variables. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 268,205 |
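The deep-ensemble recipe — train several identically structured networks from different random initializations and read the prediction spread as uncertainty — can be sketched with scikit-learn on toy data; the actual model is a far larger network over Sentinel imagery:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=200)

# A deep ensemble: same architecture, different random initializations.
members = [MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                        random_state=s).fit(X, y) for s in range(5)]

X_test = np.linspace(-2, 2, 5).reshape(-1, 1)
preds = np.stack([m.predict(X_test) for m in members])
mean, std = preds.mean(axis=0), preds.std(axis=0)  # prediction + uncertainty
print(np.round(mean, 2), np.round(std, 3))
```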
2305.14441 | Exploring Contrast Consistency of Open-Domain Question Answering Systems
on Minimally Edited Questions | Contrast consistency, the ability of a model to make consistently correct predictions in the presence of perturbations, is an essential aspect in NLP. While studied in tasks such as sentiment analysis and reading comprehension, it remains unexplored in open-domain question answering (OpenQA) due to the difficulty of collecting perturbed questions that satisfy factuality requirements. In this work, we collect minimally edited questions as challenging contrast sets to evaluate OpenQA models. Our collection approach combines both human annotation and large language model generation. We find that the widely used dense passage retriever (DPR) performs poorly on our contrast sets, despite fitting the training set well and performing competitively on standard test sets. To address this issue, we introduce a simple and effective query-side contrastive loss with the aid of data augmentation to improve DPR training. Our experiments on the contrast sets demonstrate that DPR's contrast consistency is improved without sacrificing its accuracy on the standard test sets. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 367,031 |
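A generic in-batch contrastive (InfoNCE-style) loss of the kind described, assuming each query is paired with an embedding of its minimally edited version; the paper's exact positive/negative construction for DPR training may differ:

```python
import torch
import torch.nn.functional as F

def query_side_contrastive_loss(q, q_edit, temperature=0.05):
    """Pull each query toward its edited version, push it away from the
    other queries in the batch. q, q_edit: (B, D) embeddings."""
    q = F.normalize(q, dim=-1)
    q_edit = F.normalize(q_edit, dim=-1)
    logits = q @ q_edit.t() / temperature    # (B, B) similarity matrix
    targets = torch.arange(q.size(0))        # positives on the diagonal
    return F.cross_entropy(logits, targets)

q, q_edit = torch.randn(8, 128), torch.randn(8, 128)
print(query_side_contrastive_loss(q, q_edit).item())
```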
2104.06529 | BERT Embeddings Can Track Context in Conversational Search | The use of conversational assistants to search for information is becoming increasingly popular among the general public, pushing the research towards more advanced and sophisticated techniques. In the last few years, in particular, the interest in conversational search is increasing, not only because of the generalization of conversational assistants but also because conversational search is a step forward in allowing a more natural interaction with the system. In this work, the focus is on exploring the context of the conversation via the historical utterances and respective embeddings with the aim of developing a conversational search system that helps people search for information in a natural way. In particular, this system must be able to understand the context where the question is posed, tracking the current state of the conversation and detecting mentions of previous questions and answers. We achieve this by using a context-tracking component based on neural query-rewriting models. Another crucial aspect of the system is to provide the most relevant answers given the question and the conversational history. To achieve this objective, we used a Transformer-based re-ranking method and expanded this architecture to use the conversational context. The results obtained with the developed system showed the advantages of using the context present in the natural language utterances and in the neural embeddings generated throughout the conversation. | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | 230,104
1805.06413 | CASCADE: Contextual Sarcasm Detection in Online Discussion Forums | The literature in automated sarcasm detection has mainly focused on lexical, syntactic and semantic-level analysis of text. However, a sarcastic sentence can be expressed with contextual presumptions, background and commonsense knowledge. In this paper, we propose CASCADE (a ContextuAl SarCasm DEtector) that adopts a hybrid approach of both content and context-driven modeling for sarcasm detection in online social media discussions. For the latter, CASCADE aims at extracting contextual information from the discourse of a discussion thread. Also, since the sarcastic nature and form of expression can vary from person to person, CASCADE utilizes user embeddings that encode stylometric and personality features of the users. When used along with content-based feature extractors such as Convolutional Neural Networks (CNNs), we see a significant boost in the classification performance on a large Reddit corpus. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 97,600 |
1901.07851 | Computer Vision and Metrics Learning for Hypothesis Testing: An
Application of Q-Q Plot for Normality Test | This paper proposes a new deep-learning method to construct test statistics by computer vision and metrics learning. The application highlighted in this paper is applying computer vision to the Q-Q plot to construct a new test statistic for normality test. To the best of our knowledge, there is no similar application documented in the literature. Traditionally, there are two families of approaches for verifying the probability distribution of a random variable. Researchers either subjectively assess the Q-Q plot or objectively use a mathematical formula, such as the Kolmogorov-Smirnov test, to formally conduct a normality test. Graphical assessment by human beings is not rigorous whereas normality test statistics may not be accurate enough when the uniformly most powerful test does not exist. It may take tens of years for statisticians to develop a new test statistic that is more powerful statistically. Our proposed method integrates four components based on deep learning: an image representation learning component of a Q-Q plot, a dimension reduction component, a metrics learning component that best quantifies the differences between two Q-Q plots for normality test, and a new normality hypothesis testing process. Our experimental results show that the machine-learning-based test statistics can outperform several widely-used traditional normality tests. This study provides convincing evidence that the proposed method could objectively create a powerful test statistic based on Q-Q plots and this method could be modified to construct many more powerful test statistics for other applications in the future. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 119,316
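The Q-Q plot coordinates themselves are easy to compute; a hand-crafted distance statistic over them (below) is the kind of baseline the paper's learned, CNN-based statistic is meant to improve upon:

```python
import numpy as np
from scipy import stats

def qq_points(sample):
    """Coordinates of a normal Q-Q plot for a standardized sample."""
    x = np.sort((sample - sample.mean()) / sample.std())
    n = len(x)
    theo = stats.norm.ppf((np.arange(1, n + 1) - 0.5) / n)  # plotting positions
    return theo, x

def qq_distance(sample):
    """A simple hand-crafted statistic: mean deviation from the 45-degree
    line. (The paper instead learns a statistic from Q-Q plot images.)"""
    theo, x = qq_points(sample)
    return np.mean(np.abs(x - theo))

rng = np.random.default_rng(0)
print(qq_distance(rng.normal(size=500)))       # small for normal data
print(qq_distance(rng.exponential(size=500)))  # larger for skewed data
```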
2003.04297 | Improved Baselines with Momentum Contrastive Learning | Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR. In this note, we verify the effectiveness of two of SimCLR's design improvements by implementing them in the MoCo framework. With simple modifications to MoCo---namely, using an MLP projection head and more data augmentation---we establish stronger baselines that outperform SimCLR and do not require large training batches. We hope this will make state-of-the-art unsupervised learning research more accessible. Code will be made public. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 167,520 |
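The MLP-projection-head modification is concrete enough to show directly; a sketch in PyTorch, replacing a ResNet-50's linear head as described (the momentum encoder, queue, and training loop are omitted):

```python
import torch
import torch.nn as nn
from torchvision import models

# MoCo v1 used a linear fc head; the v2 improvement swaps in a 2-layer MLP.
encoder = models.resnet50(weights=None)
dim_in, dim_out = encoder.fc.in_features, 128
encoder.fc = nn.Sequential(            # MLP projection head
    nn.Linear(dim_in, dim_in),
    nn.ReLU(inplace=True),
    nn.Linear(dim_in, dim_out),
)
print(encoder(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 128])
```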
1807.00966 | Supervision-by-Registration: An Unsupervised Approach to Improve the
Precision of Facial Landmark Detectors | In this paper, we present supervision-by-registration, an unsupervised approach to improve the precision of facial landmark detectors on both images and video. Our key observation is that the detections of the same landmark in adjacent frames should be coherent with registration, i.e., optical flow. Interestingly, the coherency of optical flow is a source of supervision that does not require manual labeling, and can be leveraged during detector training. For example, we can enforce in the training loss function that a detected landmark at frame$_{t-1}$ followed by optical flow tracking from frame$_{t-1}$ to frame$_t$ should coincide with the location of the detection at frame$_t$. Essentially, supervision-by-registration augments the training loss function with a registration loss, thus training the detector to have output that is not only close to the annotations in labeled images, but also consistent with registration on large amounts of unlabeled videos. End-to-end training with the registration loss is made possible by a differentiable Lucas-Kanade operation, which computes optical flow registration in the forward pass, and back-propagates gradients that encourage temporal coherency in the detector. The output of our method is a more precise image-based facial landmark detector, which can be applied to single images or video. With supervision-by-registration, we demonstrate (1) improvements in facial landmark detection on both images (300W, ALFW) and video (300VW, Youtube-Celebrities), and (2) significant reduction of jittering in video detections. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 101,958 |
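A simplified sketch of the registration loss: detections at frame t should coincide with frame t-1 detections advected by optical flow. Nearest-neighbor flow lookup is used here for brevity, whereas the paper uses a differentiable Lucas-Kanade operation so the loss can be backpropagated:

```python
import numpy as np

def registration_loss(p_prev, p_cur, flow):
    """Penalize detections that disagree with optical-flow tracking.
    p_prev: (N, 2) landmarks at frame t-1, p_cur: (N, 2) at frame t,
    flow: (H, W, 2) dense flow from t-1 to t."""
    ij = np.round(p_prev).astype(int)              # (x, y) -> nearest pixel
    tracked = p_prev + flow[ij[:, 1], ij[:, 0]]    # where flow says they moved
    return np.mean(np.sum((tracked - p_cur) ** 2, axis=1))

flow = np.full((64, 64, 2), 1.5)                   # toy uniform flow field
p_prev = np.array([[10.0, 20.0], [30.0, 40.0]])
print(registration_loss(p_prev, p_prev + 1.5, flow))  # ~0: coherent detections
```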
2312.10440 | Weight-Entanglement Meets Gradient-Based Neural Architecture Search | Weight sharing is a fundamental concept in neural architecture search (NAS), enabling gradient-based methods to explore cell-based architecture spaces significantly faster than traditional blackbox approaches. In parallel, weight \emph{entanglement} has emerged as a technique for intricate parameter sharing among architectures within macro-level search spaces. Since weight-entanglement poses compatibility challenges for gradient-based NAS methods, these two paradigms have largely developed independently in parallel sub-communities. This paper aims to bridge the gap between these sub-communities by proposing a novel scheme to adapt gradient-based methods for weight-entangled spaces. This enables us to conduct an in-depth comparative assessment and analysis of the performance of gradient-based NAS in weight-entangled search spaces. Our findings reveal that this integration of weight-entanglement and gradient-based NAS brings forth the various benefits of gradient-based methods (enhanced performance, improved supernet training properties and superior any-time performance), while preserving the memory efficiency of weight-entangled spaces. The code for our work is openly accessible \href{https://anonymous.4open.science/r/TangleNAS-527C}{here} | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 416,173
2408.05955 | Probabilistic Vision-Language Representation for Weakly Supervised
Temporal Action Localization | Weakly supervised temporal action localization (WTAL) aims to detect action instances in untrimmed videos using only video-level annotations. Since many existing works optimize WTAL models based on action classification labels, they encounter the task discrepancy problem (i.e., localization-by-classification). To tackle this issue, recent studies have attempted to utilize action category names as auxiliary semantic knowledge through vision-language pre-training (VLP). However, there are still areas where existing research falls short. Previous approaches primarily focused on leveraging textual information from language models but overlooked the alignment of dynamic human action and VLP knowledge in a joint space. Furthermore, the deterministic representation employed in previous studies struggles to capture fine-grained human motions. To address these problems, we propose a novel framework that aligns human action knowledge and VLP knowledge in a probabilistic embedding space. Moreover, we propose intra- and inter-distribution contrastive learning to enhance the probabilistic embedding space based on statistical similarities. Extensive experiments and ablation studies reveal that our method significantly outperforms all previous state-of-the-art methods. Code is available at https://github.com/sejong-rcv/PVLR. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 480,019 |
1903.10930 | Exploring Confidence Measures for Word Spotting in Heterogeneous
Datasets | In recent years, convolutional neural networks (CNNs) took over the field of document analysis and they became the predominant model for word spotting. Especially attribute CNNs, which learn the mapping between a word image and an attribute representation, showed exceptional performances. The drawback of this approach is the overconfidence of neural networks when used out of their training distribution. In this paper, we explore different metrics for quantifying the confidence of a CNN in its predictions, specifically on the retrieval problem of word spotting. With these confidence measures, we limit the inability of a retrieval list to reject certain candidates. We investigate four different approaches that are either based on the network's attribute estimations or make use of a surrogate model. Our approach also aims at answering the question for which part of a dataset the retrieval system gives reliable results. We further show that there exists a direct relation between the proposed confidence measures and the quality of an estimated attribute representation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 125,396 |
2407.21069 | High-Dimensional Fault Tolerance Testing of Highly Automated Vehicles
Based on Low-Rank Models | Ensuring fault tolerance of Highly Automated Vehicles (HAVs) is crucial for their safety due to the presence of potentially severe faults. Hence, Fault Injection (FI) testing is conducted by practitioners to evaluate the safety level of HAVs. To fully cover test cases, various driving scenarios and fault settings should be considered. However, due to numerous combinations of test scenarios and fault settings, the testing space can be complex and high-dimensional. In addition, evaluating performance in all newly added scenarios is resource-consuming. The rarity of critical faults that can cause security problems further strengthens the challenge. To address these challenges, we propose to accelerate FI testing under the low-rank Smoothness Regularized Matrix Factorization (SRMF) framework. We first organize the sparse evaluated data into a structured matrix based on its safety values. Then the untested values are estimated by the correlation captured by the matrix structure. To address high dimensionality, a low-rank constraint is imposed on the testing space. To exploit the relationships between existing scenarios and new scenarios and capture the local regularity of critical faults, three types of smoothness regularization are further designed as a complement. We conduct experiments on car-following and cut-in scenarios. The results indicate that SRMF has the lowest prediction error in various scenarios and is capable of predicting rare critical faults compared to other machine learning models. In addition, SRMF can achieve a 1171× acceleration rate, 99.3% precision and 91.1% F1 score in identifying critical faults. To the best of our knowledge, this is the first work to introduce low-rank models to FI testing of HAVs. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | true | 477,400
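A minimal sketch of the underlying idea, assuming masked low-rank factorization with a single generic smoothness penalty on adjacent columns; the paper's three tailored regularizers are not specified in the abstract and are not reproduced here:

```python
import numpy as np

def srmf_sketch(M, mask, rank=3, lam=0.1, mu=0.1, iters=500, lr=0.01, seed=0):
    """Masked low-rank factorization M ~ U @ V with one generic smoothness
    penalty on adjacent columns of V (a simplification of SRMF)."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    U = 0.1 * rng.normal(size=(n, rank))
    V = 0.1 * rng.normal(size=(rank, m))
    for _ in range(iters):
        R = mask * (U @ V - M)                 # residual on observed entries
        dV_smooth = np.zeros_like(V)           # gradient of sum ||v_j - v_{j-1}||^2
        dV_smooth[:, 1:] += V[:, 1:] - V[:, :-1]
        dV_smooth[:, :-1] -= V[:, 1:] - V[:, :-1]
        U -= lr * (R @ V.T + lam * U)
        V -= lr * (U.T @ R + lam * V + mu * dV_smooth)
    return U @ V

rng = np.random.default_rng(1)
truth = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 30))
mask = rng.uniform(size=truth.shape) < 0.3     # only 30% of cells evaluated
est = srmf_sketch(truth, mask)
print(np.abs(est - truth)[~mask].mean())       # error on the untested cells
```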
2310.12522 | Named Entity Recognition for Monitoring Plant Health Threats in Tweets:
a ChouBERT Approach | An important application scenario of precision agriculture is detecting and measuring crop health threats using sensors and data analysis techniques. However, the textual data are still under-explored among the existing solutions due to the lack of labelled data and fine-grained semantic resources. Recent research suggests that the increasing connectivity of farmers and the emergence of online farming communities make social media like Twitter a participatory platform for detecting unfamiliar plant health events if we can extract essential information from unstructured textual data. ChouBERT is a French pre-trained language model that can identify Tweets concerning observations of plant health issues with generalizability on unseen natural hazards. This paper tackles the lack of labelled data by further studying ChouBERT's know-how on token-level annotation tasks over small labeled sets. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 401,052 |
2501.04966 | Emergence of Painting Ability via Recognition-Driven Evolution | From Paleolithic cave paintings to Impressionism, human painting has evolved to depict increasingly complex and detailed scenes, conveying more nuanced messages. This paper attempts to emerge this artistic capability by simulating the evolutionary pressures that enhance visual communication efficiency. Specifically, we present a model with a stroke branch and a palette branch that together simulate human-like painting. The palette branch learns a limited colour palette, while the stroke branch parameterises each stroke using B\'ezier curves to render an image, subsequently evaluated by a high-level recognition module. We quantify the efficiency of visual communication by measuring the recognition accuracy achieved with machine vision. The model then optimises the control points and colour choices for each stroke to maximise recognition accuracy with minimal strokes and colours. Experimental results show that our model achieves superior performance in high-level recognition tasks, delivering artistic expression and aesthetic appeal, especially in abstract sketches. Additionally, our approach shows promise as an efficient bit-level image compression technique, outperforming traditional methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 523,413 |
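Sampling a stroke from its Bézier parameterization is straightforward; the sketch below evaluates a cubic curve from four control points, which in the paper would be the quantities optimized against the recognition module (the recognition-driven loop itself is not shown):

```python
import numpy as np

def bezier_points(p0, p1, p2, p3, n=50):
    """Sample a cubic Bezier stroke from its four control points."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

stroke = bezier_points(np.array([0.0, 0.0]), np.array([0.2, 1.0]),
                       np.array([0.8, 1.0]), np.array([1.0, 0.0]))
print(stroke[0], stroke[-1])   # starts at p0, ends at p3
```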
1904.12272 | A Block Sparsity Based Estimator for mmWave Massive MIMO Channels with
Beam Squint | Multiple-input multiple-output (MIMO) millimeter wave (mmWave) communication is a key technology for next generation wireless networks. One of the consequences of utilizing a large number of antennas with an increased bandwidth is that array steering vectors vary among different subcarriers. Due to this effect, known as beam squint, the conventional channel model is no longer applicable for mmWave massive MIMO systems. In this paper, we study channel estimation under the resulting non-standard model. To that aim, we first analyze the beam squint effect from an array signal processing perspective, resulting in a model which sheds light on the angle-delay sparsity of mmWave transmission. We next design a compressive sensing based channel estimation algorithm which utilizes the shift-invariant block-sparsity of this channel model. The proposed algorithm jointly computes the off-grid angles, the off-grid delays, and the complex gains of the multi-path channel. We show that the newly proposed scheme reflects the mmWave channel more accurately and results in improved performance compared to traditional approaches. We then demonstrate how this approach can be applied to recover both the uplink as well as the downlink channel in frequency division duplex (FDD) systems, by exploiting the angle-delay reciprocity of mmWave channels. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 129,064 |
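The beam squint effect described here can be made concrete: for a uniform linear array with half-wavelength spacing at the carrier, the steering vector's phase progression scales with (fc + f)/fc, so the same angle yields different vectors on different subcarriers (the carrier frequency and offset below are illustrative):

```python
import numpy as np

def steering_vector(theta, f, n_ant, d=0.5, fc=28e9):
    """ULA steering vector at carrier-relative frequency f (Hz); d is element
    spacing in wavelengths at the carrier fc. With beam squint the phase
    progression scales with (fc + f) / fc instead of staying fixed."""
    n = np.arange(n_ant)
    return np.exp(-1j * 2 * np.pi * d * ((fc + f) / fc) * n * np.sin(theta))

theta, n_ant = np.deg2rad(30), 64
a_center = steering_vector(theta, 0.0, n_ant)     # center subcarrier
a_edge = steering_vector(theta, 1e9, n_ant)       # 1 GHz away, same angle
print(np.abs(np.vdot(a_center, a_edge)) / n_ant)  # < 1: the vectors differ
```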
cs/0411074 | Building Chinese Lexicons from Scratch by Unsupervised Short Document
Self-Segmentation | Chinese text segmentation is a well-known and difficult problem. On one side, there is no simple notion of "word" in the Chinese language, making it really hard to implement rule-based systems to segment written texts, thus lexicons and statistical information are usually employed to achieve such a task. On the other side, any piece of Chinese text usually includes segments present neither in the lexicons nor in the training data. Even worse, such unseen sequences can be segmented into a number of totally unrelated words making later processing phases difficult. For instance, using a lexicon-based system the sequence 巴罗佐 (Baluozuo, Barroso, current president-designate of the European Commission) can be segmented into 巴 (ba, to hope, to wish) and 罗佐 (luozuo, an undefined word) changing completely the meaning of the sentence. A new and extremely simple algorithm specially suited to work over short Chinese documents is introduced. This new algorithm performs text "self-segmentation" producing results comparable to those achieved by native speakers without using either lexicons or any statistical information beyond that obtained from the input text. Furthermore, it is really robust for finding new "words", especially proper nouns, and it is well suited to build lexicons from scratch. Some preliminary results are provided in addition to examples of its employment. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 538,408
1906.02989 | Learning Representations of Graph Data -- A Survey | Deep Neural Networks have shown tremendous success in the area of object recognition, image classification and natural language processing. However, designing optimal Neural Network architectures that can learn and output arbitrary graphs is an ongoing research problem. The objective of this survey is to summarize and discuss the latest advances in methods to Learn Representations of Graph Data. We start by identifying commonly used types of graph data and review basics of graph theory. This is followed by a discussion of the relationships between graph kernel methods and neural networks. Next we identify the major approaches used for learning representations of graph data namely: Kernel approaches, Convolutional approaches, Graph neural networks approaches, Graph embedding approaches and Probabilistic approaches. A variety of methods under each of the approaches are discussed and the survey is concluded with a brief discussion of the future of learning representation of graph data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 134,242 |
2501.09608 | Metric Learning with Progressive Self-Distillation for Audio-Visual Embedding Learning | Metric learning projects samples into an embedded space, where similarities and dissimilarities are quantified based on their learned representations. However, existing methods often rely on label-guided representation learning, where representations of different modalities, such as audio and visual data, are aligned based on annotated labels. This approach tends to underutilize latent complex features and potential relationships inherent in the distributions of audio and visual data that are not directly tied to the labels, resulting in suboptimal performance in audio-visual embedding learning. To address this issue, we propose a novel architecture that integrates cross-modal triplet loss with progressive self-distillation. Our method enhances representation learning by leveraging inherent distributions and dynamically refining soft audio-visual alignments -- probabilistic alignments between audio and visual data that capture the inherent relationships beyond explicit labels. Specifically, the model distills audio-visual distribution-based knowledge from annotated labels in a subset of each batch. This self-distilled knowledge is used t | false | false | true | false | true | true | false | false | false | false | false | true | false | false | false | false | false | true | 525,200
2407.20814 | Embracing Fairness in Consumer Electricity Markets using an Automatic Market Maker | As consumer flexibility becomes expected, it is important that the market mechanisms which attain that flexibility are perceived as fair. We set out fairness issues in energy markets today, and propose a market design to address them. Consumption is categorised as either essential or flexible with different prices and reliability levels for each. Prices are generated by an Automatic Market Maker (AMM) based on instantaneous scarcity and resource is allocated using a novel Fair Play algorithm. We empirically show the performance of the system over 1 year for 101 UK households and benchmark its performance against more classical approaches. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 477,296
2306.01325 | LyricSIM: A Novel Dataset and Benchmark for Similarity Detection in Spanish Song Lyrics | In this paper, we present a new dataset and benchmark tailored to the task of semantic similarity in song lyrics. Our dataset, originally consisting of 2775 pairs of Spanish songs, was annotated in a collective annotation experiment by 63 native annotators. After collecting and refining the data to ensure a high degree of consensus and data integrity, we obtained 676 high-quality annotated pairs that were used to evaluate the performance of various state-of-the-art monolingual and multilingual language models. Consequently, we established baseline results that we hope will be useful to the community in all future academic and industrial applications conducted in this context. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 370,406
2202.07626 | Random Feature Amplification: Feature Learning and Generalization in Neural Networks | In this work, we provide a characterization of the feature-learning process in two-layer ReLU networks trained by gradient descent on the logistic loss following random initialization. We consider data with binary labels that are generated by an XOR-like function of the input features. We permit a constant fraction of the training labels to be corrupted by an adversary. We show that, although linear classifiers are no better than random guessing for the distribution we consider, two-layer ReLU networks trained by gradient descent achieve generalization error close to the label noise rate. We develop a novel proof technique that shows that at initialization, the vast majority of neurons function as random features that are only weakly correlated with useful features, and the gradient descent dynamics 'amplify' these weak, random features to strong, useful features. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 280,611
2310.20151 | Multi-Agent Consensus Seeking via Large Language Models | Multi-agent systems driven by large language models (LLMs) have shown promising abilities for solving complex tasks in a collaborative manner. This work considers a fundamental problem in multi-agent collaboration: consensus seeking. When multiple agents work together, we are interested in how they can reach a consensus through inter-agent negotiation. To that end, this work studies a consensus-seeking task where the state of each agent is a numerical value and they negotiate with each other to reach a consensus value. It is revealed that when not explicitly directed on which strategy should be adopted, the LLM-driven agents primarily use the average strategy for consensus seeking although they may occasionally use some other strategies. Moreover, this work analyzes the impact of the agent number, agent personality, and network topology on the negotiation process. The findings reported in this work can potentially lay the foundations for understanding the behaviors of LLM-driven multi-agent systems for solving more complex tasks. Furthermore, LLM-driven consensus seeking is applied to a multi-robot aggregation task. This application demonstrates the potential of LLM-driven agents to achieve zero-shot autonomous planning for multi-robot collaboration tasks. Project website: windylab.github.io/ConsensusLLM/. | false | false | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | 404,274 |
2501.06189 | A Multimodal Social Agent | In recent years, large language models (LLMs) have demonstrated remarkable progress in common-sense reasoning tasks. This ability is fundamental to understanding social dynamics, interactions, and communication. However, the potential of integrating computers with these social capabilities is still relatively unexplored. This paper introduces MuSA, a multimodal LLM-based agent that analyzes text-rich social content tailored to address selected human-centric content analysis tasks, such as question answering, visual question answering, title generation, and categorization. It uses planning, reasoning, acting, optimizing, criticizing, and refining strategies to complete a task. Our approach demonstrates that MuSA can automate and improve social content analysis, helping decision-making processes across various applications. We have evaluated our agent's capabilities in question answering, title generation, and content categorization tasks. MuSA performs substantially better than our baselines. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 523,868
1607.04436 | A Real-Time Deep Learning Pedestrian Detector for Robot Navigation | A real-time Deep Learning based method for Pedestrian Detection (PD) is applied to the Human-Aware robot navigation problem. The pedestrian detector combines the Aggregate Channel Features (ACF) detector with a deep Convolutional Neural Network (CNN) in order to obtain fast and accurate performance. Our solution is first evaluated using a set of real images taken from onboard and offboard cameras, and then validated in a typical robot navigation environment with pedestrians (two distinct experiments are conducted). The results on both tests show that our pedestrian detector is robust and fast enough to be used in robot navigation applications. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 58,618
2106.03149 | Large-scale Unsupervised Semantic Segmentation | Empowered by large datasets, e.g., ImageNet, unsupervised learning on large-scale data has enabled significant advances for classification tasks. However, whether large-scale unsupervised semantic segmentation can be achieved remains unknown. There are two major challenges: i) we need a large-scale benchmark for assessing algorithms; ii) we need to develop methods to simultaneously learn category and shape representation in an unsupervised manner. In this work, we propose a new problem of large-scale unsupervised semantic segmentation (LUSS) with a newly created benchmark dataset to help the research progress. Building on the ImageNet dataset, we propose the ImageNet-S dataset with 1.2 million training images and 50k high-quality semantic segmentation annotations for evaluation. Our benchmark has a high data diversity and a clear task objective. We also present a simple yet effective method that works surprisingly well for LUSS. In addition, we benchmark related un/weakly/fully supervised methods accordingly, identifying the challenges and possible directions of LUSS. The benchmark and source code are publicly available at https://github.com/LUSSeg. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 239,196
2103.14167 | COTR: Correspondence Transformer for Matching Across Images | We propose a novel framework for finding correspondences in images based on a deep neural network that, given two images and a query point in one of them, finds its correspondence in the other. By doing so, one has the option to query only the points of interest and retrieve sparse correspondences, or to query all points in an image and obtain dense mappings. Importantly, in order to capture both local and global priors, and to let our model relate between image regions using the most relevant among said priors, we realize our network using a transformer. At inference time, we apply our correspondence network by recursively zooming in around the estimates, yielding a multiscale pipeline able to provide highly-accurate correspondences. Our method significantly outperforms the state of the art on both sparse and dense correspondence problems on multiple datasets and tasks, ranging from wide-baseline stereo to optical flow, without any retraining for a specific dataset. We commit to releasing data, code, and all the tools necessary to train from scratch and ensure reproducibility. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 226,751 |
2409.11219 | Score Forgetting Distillation: A Swift, Data-Free Method for Machine Unlearning in Diffusion Models | The machine learning community is increasingly recognizing the importance of fostering trust and safety in modern generative AI (GenAI) models. We posit machine unlearning (MU) as a crucial foundation for developing safe, secure, and trustworthy GenAI models. Traditional MU methods often rely on stringent assumptions and require access to real data. This paper introduces Score Forgetting Distillation (SFD), an innovative MU approach that promotes the forgetting of undesirable information in diffusion models by aligning the conditional scores of "unsafe" classes or concepts with those of "safe" ones. To eliminate the need for real data, our SFD framework incorporates a score-based MU loss into the score distillation objective of a pretrained diffusion model. This serves as a regularization term that preserves desired generation capabilities while enabling the production of synthetic data through a one-step generator. Our experiments on pretrained label-conditional and text-to-image diffusion models demonstrate that our method effectively accelerates the forgetting of target classes or concepts during generation, while preserving the quality of other classes or concepts. This unlearned and distilled diffusion not only pioneers a novel concept in MU but also accelerates the generation speed of diffusion models. Our experiments and studies on a range of diffusion models and datasets confirm that our approach is generalizable, effective, and advantageous for MU in diffusion models. (Warning: This paper contains sexually explicit imagery, discussions of pornography, racially-charged terminology, and other content that some readers may find disturbing, distressing, and/or offensive.) | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 489,056
2201.03103 | Dual Seminorms, Ergodic Coefficients and Semicontraction Theory | Dynamical systems that are contracting on a subspace are said to be semicontracting. Semicontraction theory is a useful tool in the study of consensus algorithms and dynamical flow systems such as Markov chains. To develop a comprehensive theory of semicontracting systems, we investigate seminorms on vector spaces and define two canonical notions: projection and distance seminorms. We show that the well-known $\ell_p$ ergodic coefficients are induced matrix seminorms and play a central role in stability problems. In particular, we formulate a duality theorem that explains why the Markov-Dobrushin coefficient is the rate of contraction for both averaging and conservation flows in discrete time. Moreover, we obtain parallel results for induced matrix log seminorms. Finally, we propose comprehensive theorems for strong semicontractivity of linear and non-linear time-varying dynamical systems with invariance and conservation properties both in discrete and continuous time. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 274,747
2212.07289 | ConQueR: Query Contrast Voxel-DETR for 3D Object Detection | Although DETR-based 3D detectors can simplify the detection pipeline and achieve direct sparse predictions, their performance still lags behind dense detectors with post-processing for 3D object detection from point clouds. DETRs usually adopt a larger number of queries than GTs (e.g., 300 queries vs. 40 objects in Waymo) in a scene, which inevitably incurs many false positives during inference. In this paper, we propose a simple yet effective sparse 3D detector, named Query Contrast Voxel-DETR (ConQueR), to eliminate the challenging false positives, and achieve more accurate and sparser predictions. We observe that most false positives are highly overlapping in local regions, caused by the lack of explicit supervision to discriminate locally similar queries. We thus propose a Query Contrast mechanism to explicitly enhance queries towards their best-matched GTs over all unmatched query predictions. This is achieved by the construction of positive and negative GT-query pairs for each GT, and a contrastive loss to enhance positive GT-query pairs against negative ones based on feature similarities. ConQueR closes the gap between sparse and dense 3D detectors, and reduces up to ~60% false positives. Our single-frame ConQueR achieves new state-of-the-art (sota) 71.6 mAPH/L2 on the challenging Waymo Open Dataset validation set, outperforming previous sota methods (e.g., PV-RCNN++) by over 2.0 mAPH/L2. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 336,362
2402.08570 | Online Foundation Model Selection in Robotics | Foundation models have recently expanded into robotics after excelling in computer vision and natural language processing. The models are accessible in two ways: open-source or paid, closed-source options. Users with access to both face a problem when deciding between effective yet costly closed-source models and free but less powerful open-source alternatives. We call it the model selection problem. Existing supervised-learning methods are impractical due to the high cost of collecting extensive training data from closed-source models. Hence, we focus on the online learning setting where algorithms learn while collecting data, eliminating the need for large pre-collected datasets. We thus formulate a user-centric online model selection problem and propose a novel solution that combines an open-source encoder to output context and an online learning algorithm that processes this context. The encoder distills vast data distributions into low-dimensional features, i.e., the context, without additional training. The online learning algorithm aims to maximize a composite reward that includes model performance, execution time, and costs based on the context extracted from the data. It results in an improved trade-off between selecting open-source and closed-source models compared to non-contextual methods, as validated by our theoretical analysis. Experiments across language-based robotic tasks such as Waymo Open Dataset, ALFRED, and Open X-Embodiment demonstrate real-world applications of the solution. The results show that the solution significantly improves the task success rate by up to 14%. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 429,142 |
1912.08393 | Salient Object Detection with Purificatory Mechanism and Structural Similarity Loss | With the aid of attention mechanisms to weight image features adaptively, recent advanced deep learning-based models encourage the predicted results to approximate the ground-truth masks with as large predictable areas as possible, thus achieving state-of-the-art performance. However, these methods do not pay enough attention to small areas prone to misprediction. In this way, it is still tough to accurately locate salient objects due to the existence of regions with indistinguishable foreground and background and regions with complex or fine structures. To address these problems, we propose a novel convolutional neural network with a purificatory mechanism and a structural similarity loss. Specifically, in order to better locate preliminary salient objects, we first introduce the promotion attention, which is based on spatial and channel attention mechanisms to promote attention to salient regions. Subsequently, for the purpose of restoring the indistinguishable regions that can be regarded as error-prone regions of one model, we propose the rectification attention, which is learned from the areas of wrong prediction and guides the network to focus on error-prone regions, thus rectifying errors. Through these two attentions, we use the Purificatory Mechanism to impose strict weights on different regions of the whole salient objects and purify results from hard-to-distinguish regions, thus accurately predicting the locations and details of salient objects. In addition to paying different attention to these hard-to-distinguish regions, we also consider the structural constraints on complex regions and propose the Structural Similarity Loss. In experiments, the proposed approach outperforms 19 state-of-the-art methods on six datasets with a notable margin at over 27 FPS on a single NVIDIA 1080Ti GPU. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 157,825
2403.08171 | On Tractable $\Phi$-Equilibria in Non-Concave Games | While Online Gradient Descent and other no-regret learning procedures are known to efficiently converge to a coarse correlated equilibrium in games where each agent's utility is concave in their own strategy, this is not the case when utilities are non-concave -- a common scenario in machine learning applications involving strategies parameterized by deep neural networks, or when agents' utilities are computed by neural networks, or both. Non-concave games introduce significant game-theoretic and optimization challenges: (i) Nash equilibria may not exist; (ii) local Nash equilibria, though existing, are intractable; and (iii) mixed Nash, correlated, and coarse correlated equilibria generally have infinite support and are intractable. To sidestep these challenges, we revisit the classical solution concept of $\Phi$-equilibria introduced by Greenwald and Jafari [2003], which is guaranteed to exist for an arbitrary set of strategy modifications $\Phi$ even in non-concave games [Stoltz and Lugosi, 2007]. However, the tractability of $\Phi$-equilibria in such games remains elusive. In this paper, we initiate the study of tractable $\Phi$-equilibria in non-concave games and examine several natural families of strategy modifications. We show that when $\Phi$ is finite, there exists an efficient uncoupled learning algorithm that converges to the corresponding $\Phi$-equilibria. Additionally, we explore cases where $\Phi$ is infinite but consists of local modifications, showing that Online Gradient Descent can efficiently approximate $\Phi$-equilibria in non-trivial regimes. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 437,208 |
2310.05280 | Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems | Recent advancements in Large Language Models empower them to follow freeform instructions, including imitating generic or specific demographic personas in conversations. We define generic personas to represent demographic groups, such as "an Asian person", whereas specific personas may take the form of specific popular Asian names like "Yumi". While the adoption of personas enriches user experiences by making dialogue systems more engaging and approachable, it also casts a shadow of potential risk by exacerbating social biases within model responses, thereby causing societal harm through interactions with users. In this paper, we systematically study "persona biases", which we define to be the sensitivity of dialogue models' harmful behaviors contingent upon the personas they adopt. We categorize persona biases into biases in harmful expression and harmful agreement, and establish a comprehensive evaluation framework to measure persona biases in five aspects: Offensiveness, Toxic Continuation, Regard, Stereotype Agreement, and Toxic Agreement. Additionally, we propose to investigate persona biases by experimenting with UNIVERSALPERSONA, a systematically constructed persona dataset encompassing various types of both generic and specific model personas. Through benchmarking on four different models -- including Blender, ChatGPT, Alpaca, and Vicuna -- our study uncovers significant persona biases in dialogue systems. Our findings also underscore the pressing need to revisit the use of personas in dialogue agents to ensure safe application. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 398,075
1811.04620 | Depth Image Upsampling based on Guided Filter with Low Gradient Minimization | In this paper, we present a novel upsampling framework to enhance the spatial resolution of the depth image. In our framework, the upscaling of a low-resolution depth image is guided by a corresponding intensity image; we formulate it as a cost aggregation problem with the guided filter. However, the guided filter does not make full use of the properties of the depth image. Depth images have quite sparse gradients, which inspires us to regularize the gradients to improve depth upscaling results. Statistics show a special property of depth images: there is a non-ignorable portion of pixels whose horizontal or vertical derivatives are equal to $\pm 1$. Considering this special property, we propose a low gradient regularization method which reduces the penalty for horizontal or vertical derivatives of $\pm 1$. The proposed low gradient regularization is integrated with the guided filter into the depth image upsampling method. Experimental results demonstrate the effectiveness of our proposed approach both qualitatively and quantitatively compared with the state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 113,133
1503.07723 | Voting Behaviour and Power in Online Democracy: A Study of LiquidFeedback in Germany's Pirate Party | In recent years, political parties have adopted Online Delegative Democracy platforms such as LiquidFeedback to organise themselves and their political agendas via a grassroots approach. A common objection against the use of these platforms is the delegation system, where a user can delegate his vote to another user, giving rise to so-called super-voters, i.e. powerful users who receive many delegations. It has been asserted in the past that the presence of these super-voters undermines the democratic process, and therefore delegative democracy should be avoided. In this paper, we look at the emergence of super-voters in the largest delegative online democracy platform worldwide, operated by Germany's Pirate Party. We investigate the distribution of power within the party systematically, study whether super-voters exist, and explore the influence they have on the outcome of votes conducted online. While we find that the theoretical power of super-voters is indeed high, we also observe that they use their power wisely. Super-voters do not fully act on their power to change the outcome of votes, but in many cases vote in favour of proposals with the majority of voters, thereby exhibiting a stabilising effect on the system. We use these findings to present a novel class of power indices that considers observed voting biases and gives significantly better predictions than state-of-the-art measures. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 41,510
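Every record above follows the same pipe-delimited layout visible in the rows themselves: an arXiv id, a title, an abstract, a run of true/false category flags, and a trailing comma-grouped integer index. The sketch below shows one way such a dump could be parsed; it is a minimal illustration, not the dataset's official loader, and it assumes one record per line, that titles and abstracts never contain the padded " | " separator, and an illustrative file name rows.txt.

```python
# Minimal sketch for parsing one pipe-delimited record of this dump.
# Assumptions (not guaranteed by the dump itself): one record per line,
# and no " | " sequence inside the title or abstract fields.
from dataclasses import dataclass
from typing import List


@dataclass
class Row:
    arxiv_id: str      # e.g. "2310.20151" or "cs/0411074"
    title: str
    abstract: str      # may be truncated by the export, as some rows show
    flags: List[bool]  # boolean category flags, in column order
    index: int         # trailing integer index


def parse_row(line: str) -> Row:
    parts = [p.strip() for p in line.split(" | ")]
    flags = [p == "true" for p in parts[3:-1]]
    # Indices appear with thousands separators, e.g. "41,510".
    index = int(parts[-1].replace(",", ""))
    return Row(parts[0], parts[1], parts[2], flags, index)


def load_rows(path: str = "rows.txt") -> List[Row]:
    with open(path, encoding="utf-8") as f:
        return [parse_row(line) for line in f if line.strip()]
```

Splitting on the three-character " | " separator rather than a bare "|" is deliberate: the field separator in this dump is consistently space-padded, which makes collisions with an unpadded "|" inside an abstract less likely, though not impossible.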