id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2410.03428 | Research Landscape of the novel emerging field of Cryptoeconomics | A bibliometric literature analysis was conducted to illuminate the evolving and rapidly expanding literature in the field of cryptoeconomics. This analysis presented the emerging field's intellectual, social, and conceptual structure. The intellectual structure, characterized by schools of thought, emerged through a co-citation analysis. The social structure revealed collaborations among researchers, identified through a co-authorship analysis. Network analysis highlighted collaborative communities facilitating innovation and knowledge exchange within the field. The conceptual structure was outlined by analyzing common terms occurring in titles, author keywords, abstracts, and the publications themselves. This bibliometric analysis of the rapidly advancing field of cryptoeconomics serves as a foundational resource, providing insights into research productivity and emerging trends. It contributes to a deeper understanding of the field, offering valuable information on research patterns and trends. Furthermore, this analysis empowers researchers, policymakers, and industry sectors to make informed decisions, establish collaborations, and navigate the dynamic and evolving landscape of the cryptoeconomics field. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 494,775 |
2303.00703 | Nearest Neighbors Meet Deep Neural Networks for Point Cloud Analysis | Performances on standard 3D point cloud benchmarks have plateaued, resulting in oversized models and complex network design to make a fractional improvement. We present an alternative to enhance existing deep neural networks without any redesigning or extra parameters, termed the Spatial-Neighbor Adapter (SN-Adapter). Building on any trained 3D network, we utilize its learned encoding capability to extract features of the training dataset and summarize them as prototypical spatial knowledge. For a test point cloud, the SN-Adapter retrieves k nearest neighbors (k-NN) from the pre-constructed spatial prototypes and linearly interpolates the k-NN prediction with that of the original 3D network. By providing complementary characteristics, the proposed SN-Adapter serves as a plug-and-play module to economically improve performance in a non-parametric manner. More importantly, our SN-Adapter can be effectively generalized to various 3D tasks, including shape classification, part segmentation, and 3D object detection, demonstrating its superiority and robustness. We hope our approach can offer a new perspective for point cloud analysis and facilitate future research. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 348,684 |
2211.00894 | Mixed Membership Estimation for Weighted Networks | Community detection in overlapping unweighted networks, in which nodes can belong to multiple communities, has been one of the most popular topics in modern network science during the last decade. However, community detection in overlapping weighted networks, in which edge weights can be any real values, remains a challenge. In this article, to model overlapping weighted networks with latent community memberships, we propose a generative model called the degree-corrected mixed membership distribution-free model, which can be viewed as generalizing several previous models. First, we address the community membership estimation of the proposed model by an application of a spectral algorithm and establish a theoretical guarantee of consistency. We then propose overlapping weighted modularity to measure the quality of overlapping community detection for weighted networks with positive and negative edge weights. To determine the number of communities for weighted networks, we incorporate the algorithm into the overlapping weighted modularity. We demonstrate the advantages of the degree-corrected mixed membership distribution-free model and overlapping weighted modularity with applications to simulated data and eleven real-world networks. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 328,042 |
1810.01279 | Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network | We present a new algorithm to train a robust neural network against adversarial attacks. Our algorithm is motivated by the following two ideas. First, although recent work has demonstrated that fusing randomness can improve the robustness of neural networks (Liu 2017), we noticed that adding noise blindly to all the layers is not the optimal way to incorporate randomness. Instead, we model randomness under the framework of Bayesian Neural Network (BNN) to formally learn the posterior distribution of models in a scalable way. Second, we formulate the mini-max problem in BNN to learn the best model distribution under adversarial attacks, leading to an adversarially trained Bayesian neural net. Experimental results demonstrate that the proposed algorithm achieves state-of-the-art performance under strong attacks. On CIFAR-10 with VGG network, our model leads to 14\% accuracy improvement compared with adversarial training (Madry 2017) and random self-ensemble (Liu 2017) under PGD attack with $0.035$ distortion, and the gap becomes even larger on a subset of ImageNet. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 109,367 |
2005.13312 | AutoSweep: Recovering 3D Editable Objects from a Single Photograph | This paper presents a fully automatic framework for extracting editable 3D objects directly from a single photograph. Unlike previous methods which recover either depth maps, point clouds, or mesh surfaces, we aim to recover 3D objects with semantic parts that can be directly edited. We base our work on the assumption that most human-made objects are constituted by parts and these parts can be well represented by generalized primitives. Our work makes an attempt towards recovering two types of primitive-shaped objects, namely, generalized cuboids and generalized cylinders. To this end, we build a novel instance-aware segmentation network, GeoNet, for accurate part separation. It outputs a set of smooth part-level masks labeled as profiles and bodies. Then in a key stage, we simultaneously identify profile-body relations and recover 3D parts by sweeping the recognized profiles along their body contours and jointly optimizing the geometry to align with the recovered masks. Qualitative and quantitative experiments show that our algorithm can recover high quality 3D models and outperforms existing methods in both instance segmentation and 3D reconstruction. The dataset and code of AutoSweep are available at https://chenxin.tech/AutoSweep.html. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 178,982 |
2406.11142 | Graspness Discovery in Clutters for Fast and Accurate Grasp Detection | Efficient and robust grasp pose detection is vital for robotic manipulation. For general 6 DoF grasping, conventional methods treat all points in a scene equally and usually adopt uniform sampling to select grasp candidates. However, we discover that ignoring where to grasp greatly harms the speed and accuracy of current grasp pose detection methods. In this paper, we propose "graspness", a quality based on geometry cues that distinguishes graspable areas in cluttered scenes. A look-ahead searching method is proposed for measuring the graspness and statistical results justify the rationality of our method. To quickly detect graspness in practice, we develop a neural network named cascaded graspness model to approximate the searching process. Extensive experiments verify the stability, generality and effectiveness of our graspness model, allowing it to be used as a plug-and-play module for different methods. A large improvement in accuracy is witnessed for various previous methods after equipping our graspness model. Moreover, we develop GSNet, an end-to-end network that incorporates our graspness model for early filtering of low-quality predictions. Experiments on a large-scale benchmark, GraspNet-1Billion, show that our method outperforms previous arts by a large margin (30+ AP) and achieves a high inference speed. The library of GSNet has been integrated into AnyGrasp, which is at https://github.com/graspnet/anygrasp_sdk. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 464,736 |
1112.1314 | On Optimal Link Activation with Interference Cancellation in Wireless Networking | A fundamental aspect in performance engineering of wireless networks is optimizing the set of links that can be concurrently activated to meet given signal-to-interference-and-noise ratio (SINR) thresholds. The solution of this combinatorial problem is the key element in scheduling and cross-layer resource management. Previous works on link activation assume single-user decoding receivers, that treat interference in the same way as noise. In this paper, we assume multiuser decoding receivers, which can cancel strongly interfering signals. As a result, in contrast to classical spatial reuse, links being close to each other are more likely to be active simultaneously. Our goal here is to deliver a comprehensive theoretical and numerical study on optimal link activation under this novel setup, in order to provide insight into the gains from adopting interference cancellation. We therefore consider the optimal problem setting of successive interference cancellation (SIC), as well as the simpler, yet instructive, case of parallel interference cancellation (PIC). We prove that both problems are NP-hard and develop compact integer linear programming formulations that enable us to approach the global optimum solutions. We provide an extensive numerical performance evaluation, indicating that for low to medium SINR thresholds the improvement is quite substantial, especially with SIC, whereas for high SINR thresholds the improvement diminishes and both schemes perform equally well. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 13,338 |
1901.11524 | The Value Function Polytope in Reinforcement Learning | We establish geometric and topological properties of the space of value functions in finite state-action Markov decision processes. Our main contribution is the characterization of the nature of its shape: a general polytope (Aigner et al., 2010). To demonstrate this result, we exhibit several properties of the structural relationship between policies and value functions including the line theorem, which shows that the value functions of policies constrained on all but one state describe a line segment. Finally, we use this novel perspective to introduce visualizations to enhance the understanding of the dynamics of reinforcement learning algorithms. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 120,283 |
2410.01599 | Towards Model Discovery Using Domain Decomposition and PINNs | We enhance machine learning algorithms for learning model parameters in complex systems represented by ordinary differential equations (ODEs) with domain decomposition methods. The study evaluates the performance of two approaches, namely (vanilla) Physics-Informed Neural Networks (PINNs) and Finite Basis Physics-Informed Neural Networks (FBPINNs), in learning the dynamics of test models with a quasi-stationary longtime behavior. We test the approaches on data sets from different dynamical regions and with varying noise levels. We find better performance for the FBPINN approach compared to the vanilla PINN approach, even in cases with data from only a quasi-stationary time domain with little dynamics. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 493,844 |
2007.12597 | Decision-Making in Driver-Automation Shared Control: A Review and Perspectives | Shared control schemes allow a human driver to work with an automated driving agent in driver-vehicle systems while retaining the driver's abilities to control. The human driver, as an essential agent in driver-vehicle shared control systems, should be precisely modeled regarding their cognitive processes, control strategies, and decision-making processes. The design of interactive strategies between drivers and automated driving agents poses a great challenge for human-centric driver assistance systems due to the inherent characteristics of humans. Many open-ended questions arise, such as: What role should the human driver play in a shared control scheme? How can an intelligent decision be made that balances the benefits of the agents in a shared control system? Motivated by these questions, it is desirable to present a survey on the decision-making between human drivers and highly automated vehicles, to understand their architectures, human driver modeling, and interaction strategies under driver-vehicle shared schemes. Finally, we discuss key future challenges and opportunities, which are likely to shape new research directions. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 188,869 |
1409.4489 | Distributed Rate Adaptation and Power Control in Fading Multiple Access Channels | Traditionally, the capacity region of a coherent fading multiple access channel (MAC) is analyzed in two popular contexts. In the first, a centralized system with full channel state information at the transmitters (CSIT) is assumed, and the communication parameters like transmit power and data-rate are jointly chosen for every fading vector realization. On the other hand, in fast-fading links with distributed CSIT, the lack of full CSI is compensated by performing ergodic averaging over sufficiently many channel realizations. Notice that the distributed CSI may necessitate decentralized power-control for optimal data-transfer. Apart from these two models, the case of slow-fading links and distributed CSIT, though relevant to many systems, has received much less attention. In this paper, a block-fading AWGN MAC with full CSI at the receiver and distributed CSI at the transmitters is considered. The links undergo independent fading, but otherwise have arbitrary fading distributions. The channel statistics and respective long-term average transmit powers are known to all parties. We first consider the case where each encoder has knowledge only of its own link quality, and not of others. For this model, we compute the adaptive capacity region, i.e. the collection of average rate-tuples under block-wise coding/decoding such that the rate-tuple for every fading realization is inside the instantaneous MAC capacity region. The key step in our solution is an optimal rate allocation function for any given set of distributed power control laws at the transmitters. This also allows us to characterize the optimal power control for a wide class of fading models. Further extensions are also proposed to account for more general CSI availability at the transmitters. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 36,080 |
2112.06351 | Neural Point Process for Learning Spatiotemporal Event Dynamics | Learning the dynamics of spatiotemporal events is a fundamental problem. Neural point processes enhance the expressivity of point process models with deep neural networks. However, most existing methods only consider temporal dynamics without spatial modeling. We propose Deep Spatiotemporal Point Process (DeepSTPP), a deep dynamics model that integrates spatiotemporal point processes. Our method is flexible, efficient, and can accurately forecast irregularly sampled events over space and time. The key construction of our approach is the nonparametric space-time intensity function, governed by a latent process. The intensity function enjoys closed form integration for the density. The latent process captures the uncertainty of the event sequence. We use amortized variational inference to infer the latent process with deep networks. Using synthetic datasets, we validate that our model can accurately learn the true intensity function. On real-world benchmark datasets, our model demonstrates superior performance over state-of-the-art baselines. Our code and data can be found at the https://github.com/Rose-STL-Lab/DeepSTPP. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 271,142 |
1212.5461 | Interactive Ant Colony Optimisation (iACO) for Early Lifecycle Software Design | Software design is crucial to successful software development, yet is a demanding multi-objective problem for software engineers. In an attempt to assist the software designer, interactive (i.e. human in-the-loop) meta-heuristic search techniques such as evolutionary computing have been applied and show promising results. Recent investigations have also shown that Ant Colony Optimization (ACO) can outperform evolutionary computing as a potential search engine for interactive software design. With a limited computational budget, ACO produces superior candidate design solutions in a smaller number of iterations. Building on these findings, we propose a novel interactive ACO (iACO) approach to assist the designer in early lifecycle software design, in which the search is steered jointly by subjective designer evaluation as well as machine fitness functions relating the structural integrity and surrogate elegance of software designs. Results show that iACO is speedy, responsive and highly effective in enabling interactive, dynamic multi-objective search in early lifecycle software design. Study participants rate the iACO search experience as compelling. Results of machine learning of fitness measure weightings indicate that software design elegance does indeed play a significant role in designer evaluation of candidate software design. We conclude that the evenness of the number of attributes and methods among classes (NAC) is a significant surrogate elegance measure, which in turn suggests that this evenness of distribution, when combined with structural integrity, is an implicit but crucial component of effective early lifecycle software design. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 20,557 |
1710.11344 | A Sequential Matching Framework for Multi-turn Response Selection in Retrieval-based Chatbots | We study the problem of response selection for multi-turn conversation in retrieval-based chatbots. The task requires matching a response candidate with a conversation context, whose challenges include how to recognize important parts of the context, and how to model the relationships among utterances in the context. Existing matching methods may lose important information in contexts as we can interpret them with a unified framework in which contexts are transformed to fixed-length vectors without any interaction with responses before matching. The analysis motivates us to propose a new matching framework that can sufficiently carry the important information in contexts to matching and model the relationships among utterances at the same time. The new framework, which we call a sequential matching framework (SMF), lets each utterance in a context interact with a response candidate at the first step and transforms the pair to a matching vector. The matching vectors are then accumulated following the order of the utterances in the context with a recurrent neural network (RNN) which models the relationships among the utterances. The context-response matching is finally calculated with the hidden states of the RNN. Under SMF, we propose a sequential convolutional network and sequential attention network and conduct experiments on two public data sets to test their performance. Experimental results show that both models can significantly outperform the state-of-the-art matching methods. We also show that the models are interpretable with visualizations that provide us insights on how they capture and leverage the important information in contexts for matching. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 83,576 |
2206.05683 | APT-36K: A Large-scale Benchmark for Animal Pose Estimation and Tracking | Animal pose estimation and tracking (APT) is a fundamental task for detecting and tracking animal keypoints from a sequence of video frames. Previous animal-related datasets focus either on animal tracking or single-frame animal pose estimation, and never on both aspects. The lack of APT datasets hinders the development and evaluation of video-based animal pose estimation and tracking methods, limiting real-world applications, e.g., understanding animal behavior in wildlife conservation. To fill this gap, we make the first step and propose APT-36K, i.e., the first large-scale benchmark for animal pose estimation and tracking. Specifically, APT-36K consists of 2,400 video clips collected and filtered from 30 animal species with 15 frames for each video, resulting in 36,000 frames in total. After manual annotation and careful double-check, high-quality keypoint and tracking annotations are provided for all the animal instances. Based on APT-36K, we benchmark several representative models on the following three tracks: (1) supervised animal pose estimation on a single frame under intra- and inter-domain transfer learning settings, (2) inter-species domain generalization test for unseen animals, and (3) animal pose estimation with animal tracking. Based on the experimental results, we gain some empirical insights and show that APT-36K provides a valuable animal pose estimation and tracking benchmark, offering new challenges and opportunities for future research. The code and dataset will be made publicly available at https://github.com/pandorgan/APT-36K. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 302,094 |
2212.02081 | YolOOD: Utilizing Object Detection Concepts for Multi-Label Out-of-Distribution Detection | Out-of-distribution (OOD) detection has attracted a large amount of attention from the machine learning research community in recent years due to its importance in deployed systems. Most of the previous studies focused on the detection of OOD samples in the multi-class classification task. However, OOD detection in the multi-label classification task, a more common real-world use case, remains an underexplored domain. In this research, we propose YolOOD - a method that utilizes concepts from the object detection domain to perform OOD detection in the multi-label classification task. Object detection models have an inherent ability to distinguish between objects of interest (in-distribution) and irrelevant objects (e.g., OOD objects) in images that contain multiple objects belonging to different class categories. These abilities allow us to convert a regular object detection model into an image classifier with inherent OOD detection capabilities with just minor changes. We compare our approach to state-of-the-art OOD detection methods and demonstrate YolOOD's ability to outperform these methods on a comprehensive suite of in-distribution and OOD benchmark datasets. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 334,678 |
2404.08069 | Persistent Classification: A New Approach to Stability of Data and Adversarial Examples | There are a number of hypotheses underlying the existence of adversarial examples for classification problems. These include the high-dimensionality of the data, high codimension in the ambient space of the data manifolds of interest, and that the structure of machine learning models may encourage classifiers to develop decision boundaries close to data points. This article proposes a new framework for studying adversarial examples that does not depend directly on the distance to the decision boundary. Similarly to the smoothed classifier literature, we define a (natural or adversarial) data point to be $(\gamma,\sigma)$-stable if the probability of the same classification is at least $\gamma$ for points sampled in a Gaussian neighborhood of the point with a given standard deviation $\sigma$. We focus on studying the differences between persistence metrics along interpolants of natural and adversarial points. We show that adversarial examples have significantly lower persistence than natural examples for large neural networks in the context of the MNIST and ImageNet datasets. We connect this lack of persistence with decision boundary geometry by measuring angles of interpolants with respect to decision boundaries. Finally, we connect this approach with robustness by developing a manifold alignment gradient metric and demonstrating the increase in robustness that can be achieved when training with the addition of this metric. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 446,098 |
2307.01540 | Learning to Prompt in the Classroom to Understand AI Limits: A pilot study | Artificial intelligence's (AI) progress holds great promise in tackling pressing societal concerns such as health and climate. Large Language Models (LLM) and the derived chatbots, like ChatGPT, have highly improved the natural language processing capabilities of AI systems allowing them to process an unprecedented amount of unstructured data. However, the ensuing excitement has led to negative sentiments, even as AI methods demonstrate remarkable contributions (e.g. in health and genetics). A key factor contributing to this sentiment is the misleading perception that LLMs can effortlessly provide solutions across domains, ignoring their limitations such as hallucinations and reasoning constraints. Acknowledging AI fallibility is crucial to address the impact of dogmatic overconfidence in possibly erroneous suggestions generated by LLMs. At the same time, it can reduce fear and other negative attitudes toward AI. This necessitates comprehensive AI literacy interventions that educate the public about LLM constraints and effective usage techniques, i.e., prompting strategies. With this aim, a pilot educational intervention was performed in a high school with 21 students. It involved presenting high-level concepts about intelligence, AI, and LLMs, followed by practical exercises involving ChatGPT in creating natural educational conversations and applying established prompting strategies. Encouraging preliminary results emerged, including high appreciation of the activity, improved interaction quality with the LLM, reduced negative AI sentiments, and a better grasp of limitations, specifically unreliability, limited understanding of commands leading to unsatisfactory responses, and limited presentation flexibility. Our aim is to explore AI acceptance factors and refine this approach for more controlled future studies. | true | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 377,389 |
1511.04670 | Uncovering Temporal Context for Video Question and Answering | In this work, we introduce Video Question Answering in temporal domain to infer the past, describe the present and predict the future. We present an encoder-decoder approach using Recurrent Neural Networks to learn temporal structures of videos and introduce a dual-channel ranking loss to answer multiple-choice questions. We explore approaches for finer understanding of video content using question form of "fill-in-the-blank", and managed to collect 109,895 video clips with duration over 1,000 hours from TACoS, MPII-MD, MEDTest 14 datasets, while the corresponding 390,744 questions are generated from annotations. Extensive experiments demonstrate that our approach significantly outperforms the compared baselines. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 48,929 |
2002.03761 | Music2Dance: DanceNet for Music-driven Dance Generation | Synthesizing human motions from music, i.e., music-to-dance generation, is appealing and has attracted substantial research interest in recent years. It is challenging due to not only the requirement of realistic and complex human motions for dance, but more importantly, the synthesized motions should be consistent with the style, rhythm and melody of the music. In this paper, we propose a novel autoregressive generative model, DanceNet, to take the style, rhythm and melody of music as the control signals to generate 3D dance motions with high realism and diversity. To boost the performance of our proposed model, we capture several synchronized music-dance pairs by professional dancers, and build a high-quality music-dance pair dataset. Experiments have demonstrated that the proposed method can achieve the state-of-the-art results. | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 163,394 |
2309.11011 | OCC-VO: Dense Mapping via 3D Occupancy-Based Visual Odometry for Autonomous Driving | Visual Odometry (VO) plays a pivotal role in autonomous systems, with a principal challenge being the lack of depth information in camera images. This paper introduces OCC-VO, a novel framework that capitalizes on recent advances in deep learning to transform 2D camera images into 3D semantic occupancy, thereby circumventing the traditional need for concurrent estimation of ego poses and landmark locations. Within this framework, we utilize the TPV-Former to convert surround view cameras' images into 3D semantic occupancy. Addressing the challenges presented by this transformation, we have specifically tailored a pose estimation and mapping algorithm that incorporates Semantic Label Filter, Dynamic Object Filter, and finally, utilizes Voxel PFilter for maintaining a consistent global semantic map. Evaluations on the Occ3D-nuScenes not only showcase a 20.6% improvement in Success Ratio and a 29.6% enhancement in trajectory accuracy against ORB-SLAM3, but also emphasize our ability to construct a comprehensive map. Our implementation is open-sourced and available at: https://github.com/USTCLH/OCC-VO. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 393,238 |
2002.06885 | What is Trending on Wikipedia? Capturing Trends and Language Biases Across Wikipedia Editions | In this work, we propose an automatic evaluation and comparison of the browsing behavior of Wikipedia readers that can be applied to any language editions of Wikipedia. As an example, we focus on English, French, and Russian languages during the last four months of 2018. The proposed method has three steps. Firstly, it extracts the most trending articles over a chosen period of time. Secondly, it performs a semi-supervised topic extraction and thirdly, it compares topics across languages. The automated processing works with the data that combines Wikipedia's graph of hyperlinks, pageview statistics and summaries of the pages. The results show that people share a common interest and curiosity for entertainment, e.g. movies, music, sports independently of their language. Differences appear in topics related to local events or about cultural particularities. Interactive visualizations showing clusters of trending pages in each language edition are available online https://wiki-insights.epfl.ch/wikitrends | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 164,334
2304.10824 | Rethinking Benchmarks for Cross-modal Image-text Retrieval | Image-text retrieval, as a fundamental and important branch of information retrieval, has attracted extensive research attentions. The main challenge of this task is cross-modal semantic understanding and matching. Some recent works focus more on fine-grained cross-modal semantic matching. With the prevalence of large scale multimodal pretraining models, several state-of-the-art models (e.g. X-VLM) have achieved near-perfect performance on widely-used image-text retrieval benchmarks, i.e. MSCOCO-Test-5K and Flickr30K-Test-1K. In this paper, we review the two common benchmarks and observe that they are insufficient to assess the true capability of models on fine-grained cross-modal semantic matching. The reason is that a large amount of images and texts in the benchmarks are coarse-grained. Based on the observation, we renovate the coarse-grained images and texts in the old benchmarks and establish the improved benchmarks called MSCOCO-FG and Flickr30K-FG. Specifically, on the image side, we enlarge the original image pool by adopting more similar images. On the text side, we propose a novel semi-automatic renovation approach to refine coarse-grained sentences into finer-grained ones with little human effort. Furthermore, we evaluate representative image-text retrieval models on our new benchmarks to demonstrate the effectiveness of our method. We also analyze the capability of models on fine-grained semantic comprehension through extensive experiments. The results show that even the state-of-the-art models have much room for improvement in fine-grained semantic understanding, especially in distinguishing attributes of close objects in images. Our code and improved benchmark datasets are publicly available at: https://github.com/cwj1412/MSCOCO-Flikcr30K_FG, which we hope will inspire further in-depth research on cross-modal retrieval. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 359,571
2404.03048 | Decentralised Moderation for Interoperable Social Networks: A Conversation-based Approach for Pleroma and the Fediverse | The recent development of decentralised and interoperable social networks (such as the "fediverse") creates new challenges for content moderators. This is because millions of posts generated on one server can easily "spread" to another, even if the recipient server has very different moderation policies. An obvious solution would be to leverage moderation tools to automatically tag (and filter) posts that contravene moderation policies, e.g. related to toxic speech. Recent work has exploited the conversational context of a post to improve this automatic tagging, e.g. using the replies to a post to help classify if it contains toxic speech. This has shown particular potential in environments with large training sets that contain complete conversations. This, however, creates challenges in a decentralised context, as a single conversation may be fragmented across multiple servers. Thus, each server only has a partial view of an entire conversation because conversations are often federated across servers in a non-synchronized fashion. To address this, we propose a decentralised conversation-aware content moderation approach suitable for the fediverse. Our approach employs a graph deep learning model (GraphNLI) trained locally on each server. The model exploits local data to train a model that combines post and conversational information captured through random walks to detect toxicity. We evaluate our approach with data from Pleroma, a major decentralised and interoperable micro-blogging network containing 2 million conversations. Our model effectively detects toxicity on larger instances, exclusively trained using their local post information (0.8837 macro-F1). Our approach has considerable scope to improve moderation in decentralised and interoperable social networks such as Pleroma or Mastodon. | false | false | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | 444,093
2407.03885 | Perception-Guided Quality Metric of 3D Point Clouds Using Hybrid Strategy | Full-reference point cloud quality assessment (FR-PCQA) aims to infer the quality of distorted point clouds with available references. Most of the existing FR-PCQA metrics ignore the fact that the human visual system (HVS) dynamically tackles visual information according to different distortion levels (i.e., distortion detection for high-quality samples and appearance perception for low-quality samples) and measure point cloud quality using unified features. To bridge the gap, in this paper, we propose a perception-guided hybrid metric (PHM) that adaptively leverages two visual strategies with respect to distortion degree to predict point cloud quality: to measure visible difference in high-quality samples, PHM takes into account the masking effect and employs texture complexity as an effective compensatory factor for absolute difference; on the other hand, PHM leverages spectral graph theory to evaluate appearance degradation in low-quality samples. Variations in geometric signals on graphs and changes in the spectral graph wavelet coefficients are utilized to characterize geometry and texture appearance degradation, respectively. Finally, the results obtained from the two components are combined in a non-linear method to produce an overall quality score of the tested point cloud. The results of the experiment on five independent databases show that PHM achieves state-of-the-art (SOTA) performance and offers significant performance improvement in multiple distortion environments. The code is publicly available at https://github.com/zhangyujie-1998/PHM. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 470,328
1911.00928 | Novel Attacks against Contingency Analysis in Power Grids | Contingency Analysis (CA) is a core component of the Energy Management System (EMS) in the power grid. The goal of CA is to operate the power system in a secure manner by analyzing the system subject to a contingency (e.g., the outage of a transmission line or a power generator) to determine the setpoints that will allow system operation without violation of constraints. The analysis in CA is conducted based on the output from State Estimation (SE), another core EMS module. However, it is also shown that an adversary can alter certain power measurements to corrupt the system states estimated by SE without being detected. Such a corrupted estimation can severely skew the results of the contingency analysis as it will provide a fake model to deal with. In this research, we formally model necessary interdependency relationships and systematically analyze these novel attacks on the contingency analysis. In particular, this research focuses on Security Constrained Optimal Power Flow (SCOPF) that finds out the optimal economic dispatches considering a single line failure (based on the $n - 1$ contingency analysis) and transmission line capacities. The proposed model is implemented and solved to find out potential threat vectors (i.e., a set of measurements to be altered) that can evade CA so that the system will face overloading situation on one or more transmission lines when some specific contingencies happen. We demonstrate our formal model on an IEEE 14 bus system-based case study and verify the results with a standard PowerWorld model. We further evaluate the model with respect to various attacks and grid characteristics. | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | true | 151,958 |
1504.05740 | When Do WOM Codes Improve the Erasure Factor in Flash Memories? | Flash memory is a write-once medium in which reprogramming cells requires first erasing the block that contains them. The lifetime of the flash is a function of the number of block erasures and can be as small as several thousands. To reduce the number of block erasures, pages, which are the smallest write unit, are rewritten out-of-place in the memory. A Write-once memory (WOM) code is a coding scheme which enables writing multiple times to the block before an erasure. However, these codes come with significant rate loss. For example, the rate for writing twice (with the same rate) is at most 0.77. In this paper, we study WOM codes and their tradeoff between rate loss and reduction in the number of block erasures, when pages are written uniformly at random. First, we introduce a new measure, called erasure factor, that reflects both the number of block erasures and the amount of data that can be written on each block. A key point in our analysis is that this tradeoff depends upon the specific implementation of WOM codes in the memory. We consider two systems that use WOM codes; a conventional scheme that was commonly used, and a new recent design that preserves the overall storage capacity. While the first system can improve the erasure factor only when the storage rate is at most 0.6442, we show that the second scheme always improves this figure of merit. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 42,311
1710.05426 | Causal Rule Sets for Identifying Subgroups with Enhanced Treatment Effect | A key question in causal inference analyses is how to find subgroups with elevated treatment effects. This paper takes a machine learning approach and introduces a generative model, Causal Rule Sets (CRS), for interpretable subgroup discovery. A CRS model uses a small set of short decision rules to capture a subgroup where the average treatment effect is elevated. We present a Bayesian framework for learning a causal rule set. The Bayesian model consists of a prior that favors simple models for better interpretability as well as avoiding overfitting, and a Bayesian logistic regression that captures the likelihood of data, characterizing the relation between outcomes, attributes, and subgroup membership. The Bayesian model has tunable parameters that can characterize subgroups with various sizes, providing users with more flexible choices of models from the \emph{treatment efficient frontier}. We find maximum a posteriori models using iterative discrete Monte Carlo steps in the joint solution space of rules sets and parameters. To improve search efficiency, we provide theoretically grounded heuristics and bounding strategies to prune and confine the search space. Experiments show that the search algorithm can efficiently recover true underlying subgroups. We apply CRS on public and real-world datasets from domains where interpretability is indispensable. We compare CRS with state-of-the-art rule-based subgroup discovery models. Results show that CRS achieved consistently competitive performance on datasets from various domains, represented by high treatment efficient frontiers. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 82,641
2403.09383 | Pantypes: Diverse Representatives for Self-Explainable Models | Prototypical self-explainable classifiers have emerged to meet the growing demand for interpretable AI systems. These classifiers are designed to incorporate high transparency in their decisions by basing inference on similarity with learned prototypical objects. While these models are designed with diversity in mind, the learned prototypes often do not sufficiently represent all aspects of the input distribution, particularly those in low density regions. Such lack of sufficient data representation, known as representation bias, has been associated with various detrimental properties related to machine learning diversity and fairness. In light of this, we introduce pantypes, a new family of prototypical objects designed to capture the full diversity of the input distribution through a sparse set of objects. We show that pantypes can empower prototypical self-explainable models by occupying divergent regions of the latent space and thus fostering high diversity, interpretability and fairness. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 437,744 |
2409.03760 | Rethinking Deep Learning: Propagating Information in Neural Networks without Backpropagation and Statistical Optimization | Developing strong AI signifies the arrival of technological singularity, contributing greatly to advancing human civilization and resolving social issues. Neural networks (NNs) and deep learning, which utilize NNs, are expected to lead to strong AI due to their biological neural system-mimicking structures. However, the statistical weight optimization techniques commonly used, such as error backpropagation and loss functions, may hinder the mimicry of neural systems. This study discusses the information propagation capabilities and potential practical applications of NNs as neural system mimicking structures by solving the handwritten character recognition problem in the Modified National Institute of Standards and Technology (MNIST) database without using statistical weight optimization techniques like error backpropagation. In this study, the NNs architecture comprises fully connected layers using step functions as activation functions, with 0-15 hidden layers, and no weight updates. The accuracy is calculated by comparing the average output vectors of the training data for each label with the output vectors of the test data, based on vector similarity. The results showed that the maximum accuracy achieved is around 80%. This indicates that NNs can propagate information correctly without using statistical weight optimization. Additionally, the accuracy decreased with an increasing number of hidden layers. This is attributed to the decrease in the variance of the output vectors as the number of hidden layers increases, suggesting that the output data becomes smooth. This study's NNs and accuracy calculation methods are simple and have room for various improvements. Moreover, creating a feedforward NNs that repeatedly cycles through 'input -> processing -> output -> environmental response -> input -> ...' could pave the way for practical software applications. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 486,160
2411.09730 | SureMap: Simultaneous Mean Estimation for Single-Task and Multi-Task Disaggregated Evaluation | Disaggregated evaluation -- estimation of performance of a machine learning model on different subpopulations -- is a core task when assessing performance and group-fairness of AI systems. A key challenge is that evaluation data is scarce, and subpopulations arising from intersections of attributes (e.g., race, sex, age) are often tiny. Today, it is common for multiple clients to procure the same AI model from a model developer, and the task of disaggregated evaluation is faced by each customer individually. This gives rise to what we call the multi-task disaggregated evaluation problem, wherein multiple clients seek to conduct a disaggregated evaluation of a given model in their own data setting (task). In this work we develop a disaggregated evaluation method called SureMap that has high estimation accuracy for both multi-task and single-task disaggregated evaluations of blackbox models. SureMap's efficiency gains come from (1) transforming the problem into structured simultaneous Gaussian mean estimation and (2) incorporating external data, e.g., from the AI system creator or from their other clients. Our method combines maximum a posteriori (MAP) estimation using a well-chosen prior together with cross-validation-free tuning via Stein's unbiased risk estimate (SURE). We evaluate SureMap on disaggregated evaluation tasks in multiple domains, observing significant accuracy improvements over several strong competitors. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 508,350
1403.6367 | A Framework for Hybrid Systems with Denial-of-Service Security Attack | Hybrid systems are integrations of discrete computation and continuous physical evolution. The physical components of such systems introduce safety requirements, the achievement of which asks for the correct monitoring and control from the discrete controllers. However, due to denial-of-service security attack, the expected information from the controllers is not received and as a consequence the physical systems may fail to behave as expected. This paper proposes a formal framework for expressing denial-of-service security attack in hybrid systems. As a virtue, a physical system is able to plan for reasonable behavior in case the ideal control fails due to unreliable communication, in such a way that the safety of the system upon denial-of-service is still guaranteed. In the context of the modeling language, we develop an inference system for verifying safety of hybrid systems, without putting any assumptions on how the environments behave. Based on the inference system, we implement an interactive theorem prover and have applied it to check an example taken from train control system. | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | true | 31,817 |
1001.1597 | The Berlekamp-Massey Algorithm via Minimal Polynomials | We present a recursive minimal polynomial theorem for finite sequences over a commutative integral domain $D$. This theorem is relative to any element of $D$. The ingredients are: the arithmetic of Laurent polynomials over $D$, a recursive 'index function' and simple mathematical induction. Taking reciprocals gives a 'Berlekamp-Massey theorem' i.e. a recursive construction of the polynomials arising in the Berlekamp-Massey algorithm, relative to any element of $D$. The recursive theorem readily yields the iterative minimal polynomial algorithm due to the author and a transparent derivation of the iterative Berlekamp-Massey algorithm. We give an upper bound for the sum of the linear complexities of $s$ which is tight if $s$ has a perfect linear complexity profile. This implies that over a field, both iterative algorithms require at most $2\lfloor \frac{n^2}{4}\rfloor$ multiplications. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 5,310 |
1512.05990 | Deformable Distributed Multiple Detector Fusion for Multi-Person Tracking | This paper addresses fully automated multi-person tracking in complex environments with challenging occlusion and extensive pose variations. Our solution combines multiple detectors for a set of different regions of interest (e.g., full-body and head) for multi-person tracking. The use of multiple detectors leads to fewer miss detections as it is able to exploit the complementary strengths of the individual detectors. While the number of false positives may increase with the increased number of bounding boxes detected from multiple detectors, we propose to group the detection outputs by bounding box location and depth information. For robustness to significant pose variations, deformable spatial relationship between detectors are learnt in our multi-person tracking system. On RGBD data from a live Intensive Care Unit (ICU), we show that the proposed method significantly improves multi-person tracking performance over state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 50,272
2407.11047 | An open source Multi-Agent Deep Reinforcement Learning Routing Simulator for satellite networks | This paper introduces an open source simulator for packet routing in Low Earth Orbit Satellite Constellations (LSatCs) considering the dynamic system uncertainties. The simulator, implemented in Python, supports traditional Dijkstra's based routing as well as more advanced learning solutions, specifically Q-Routing and Multi-Agent Deep Reinforcement Learning (MA-DRL) from our previous work. It uses an event-based approach with the SimPy module to accurately simulate packet creation, routing and queuing, providing real-time tracking of queues and latency. The simulator is highly configurable, allowing adjustments in routing policies, traffic, ground and space layer topologies, communication parameters, and learning hyperparameters. Key features include the ability to visualize system motion and track packet paths. Results highlight significant improvements in end-to-end (E2E) latency using Reinforcement Learning (RL)-based routing policies compared to traditional methods. The source code, the documentation and a Jupyter notebook with post-processing results and analysis are available on GitHub. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 473,267
1906.02295 | Progressive NAPSAC: sampling from gradually growing neighborhoods | We propose Progressive NAPSAC, P-NAPSAC in short, which merges the advantages of local and global sampling by drawing samples from gradually growing neighborhoods. Exploiting the fact that nearby points are more likely to originate from the same geometric model, P-NAPSAC finds local structures earlier than global samplers. We show that the progressive spatial sampling in P-NAPSAC can be integrated with PROSAC sampling, which is applied to the first, location-defining, point. P-NAPSAC is embedded in USAC, a state-of-the-art robust estimation pipeline, which we further improve by implementing its local optimization as in Graph-Cut RANSAC. We call the resulting estimator USAC*. The method is tested on homography and fundamental matrix fitting on a total of 10,691 models from seven publicly available datasets. USAC* with P-NAPSAC outperforms reference methods in terms of speed on all problems. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 133,997 |
2112.01736 | Gesture Recognition with a Skeleton-Based Keyframe Selection Module | We propose a bidirectional consecutively connected two-pathway network (BCCN) for efficient gesture recognition. The BCCN consists of two pathways: (i) a keyframe pathway and (ii) a temporal-attention pathway. The keyframe pathway is configured using the skeleton-based keyframe selection module. Keyframes pass through the pathway to extract the spatial feature of itself, and the temporal-attention pathway extracts temporal semantics. Our model improved gesture recognition performance in videos and obtained better activation maps for spatial and temporal properties. Tests were performed on the Chalearn dataset, the ETRI-Activity 3D dataset, and the Toyota Smart Home dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 269,594 |
2307.05663 | Objaverse-XL: A Universe of 10M+ 3D Objects | Natural language processing and 2D vision models have attained remarkable proficiency on many tasks primarily by escalating the scale of training data. However, 3D vision tasks have not seen the same progress, in part due to the challenges of acquiring high-quality 3D data. In this work, we present Objaverse-XL, a dataset of over 10 million 3D objects. Our dataset comprises deduplicated 3D objects from a diverse set of sources, including manually designed objects, photogrammetry scans of landmarks and everyday items, and professional scans of historic and antique artifacts. Representing the largest scale and diversity in the realm of 3D datasets, Objaverse-XL enables significant new possibilities for 3D vision. Our experiments demonstrate the improvements enabled with the scale provided by Objaverse-XL. We show that by training Zero123 on novel view synthesis, utilizing over 100 million multi-view rendered images, we achieve strong zero-shot generalization abilities. We hope that releasing Objaverse-XL will enable further innovations in the field of 3D vision at scale. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 378,824 |
2004.04674 | Fisher Discriminant Triplet and Contrastive Losses for Training Siamese Networks | Siamese neural network is a very powerful architecture for both feature extraction and metric learning. It usually consists of several networks that share weights. The Siamese concept is topology-agnostic and can use any neural network as its backbone. The two most popular loss functions for training these networks are the triplet and contrastive loss functions. In this paper, we propose two novel loss functions, named Fisher Discriminant Triplet (FDT) and Fisher Discriminant Contrastive (FDC). The former uses anchor-neighbor-distant triplets while the latter utilizes pairs of anchor-neighbor and anchor-distant samples. The FDT and FDC loss functions are designed based on the statistical formulation of the Fisher Discriminant Analysis (FDA), which is a linear subspace learning method. Our experiments on the MNIST and two challenging and publicly available histopathology datasets show the effectiveness of the proposed loss functions. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 171,955
2305.12147 | LogiCoT: Logical Chain-of-Thought Instruction-Tuning | Generative Pre-trained Transformer 4 (GPT-4) demonstrates impressive chain-of-thought reasoning ability. Recent work on self-instruction tuning, such as Alpaca, has focused on enhancing the general proficiency of models. These instructions enable the model to achieve performance comparable to GPT-3.5 on general tasks like open-domain text generation and paraphrasing. However, they fall short of helping the model handle complex reasoning tasks. To bridge the gap, this paper presents LogiCoT, a new instruction-tuning dataset for Logical Chain-of-Thought reasoning with GPT-4. We elaborate on the process of harvesting instructions for prompting GPT-4 to generate chain-of-thought rationales. LogiCoT serves as an instruction set for teaching models of logical reasoning and elicits general reasoning skills. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 365,863 |
2404.11996 | DST-GTN: Dynamic Spatio-Temporal Graph Transformer Network for Traffic Forecasting | Accurate traffic forecasting is essential for effective urban planning and congestion management. Deep learning (DL) approaches have gained colossal success in traffic forecasting but still face challenges in capturing the intricacies of traffic dynamics. In this paper, we identify and address these challenges by emphasizing that spatial features are inherently dynamic and change over time. A novel in-depth feature representation, called Dynamic Spatio-Temporal (Dyn-ST) features, is introduced, which encapsulates spatial characteristics across varying times. Moreover, a Dynamic Spatio-Temporal Graph Transformer Network (DST-GTN) is proposed by capturing Dyn-ST features and other dynamic adjacency relations between intersections. The DST-GTN can model dynamic ST relationships between nodes accurately and refine the representation of global and local ST characteristics by adopting adaptive weights in low-pass and all-pass filters, enabling the extraction of Dyn-ST features from traffic time-series data. Through numerical experiments on public datasets, the DST-GTN achieves state-of-the-art performance for a range of traffic forecasting tasks and demonstrates enhanced stability. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 447,695
2405.15524 | Polyp Segmentation Generalisability of Pretrained Backbones | It has recently been demonstrated that pretraining backbones in a self-supervised manner generally provides better fine-tuned polyp segmentation performance, and that models with ViT-B backbones typically perform better than models with ResNet50 backbones. In this paper, we extend this recent work to consider generalisability. I.e., we assess the performance of models on a different dataset to that used for fine-tuning, accounting for variation in network architecture and pretraining pipeline (algorithm and dataset). This reveals how well models with different pretrained backbones generalise to data of a somewhat different distribution to the training data, which will likely arise in deployment due to different cameras and demographics of patients, amongst other factors. We observe that the previous findings, regarding pretraining pipelines for polyp segmentation, hold true when considering generalisability. However, our results imply that models with ResNet50 backbones typically generalise better, despite being outperformed by models with ViT-B backbones in evaluation on the test set from the same dataset used for fine-tuning. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 456,979 |
0908.4464 | The eel-like robot | The aim of this project is to design, study and build an "eel-like robot" prototype able to swim in three dimensions. The study is based on the analysis of eel swimming and results in the realization of a prototype with 12 vertebrae, a skin and a head with two fins. To reach these objectives, a multidisciplinary group of teams and laboratories has been formed in the framework of two French projects. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 4,369 |
2407.02963 | Subspace Coding for Spatial Sensing | A subspace code is defined as a collection of subspaces of an ambient vector space, where each information-encoding codeword is a subspace. This paper studies a class of spatial sensing problems, notably direction of arrival (DoA) estimation using multisensor arrays, from a novel subspace coding perspective. Specifically, we demonstrate how a canonical (passive) sensing model can be mapped into a subspace coding problem, with the sensing operation defining a unique structure for the subspace codewords. We introduce the concept of sensing subspace codes following this structure, and show how these codes can be controlled by judiciously designing the sensor array geometry. We further present a construction of sensing subspace codes leveraging a certain class of Golomb rulers that achieve near-optimal minimum codeword distance. These designs inspire novel noise-robust sparse array geometries achieving high angular resolution. We also prove that codes corresponding to conventional uniform linear arrays are suboptimal in this regard. This work is the first to establish connections between subspace coding and spatial sensing, with the aim of leveraging insights and methodologies in one field to tackle challenging problems in the other. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 469,965 |
2201.05256 | DapStep: Deep Assignee Prediction for Stack Trace Error rePresentation | The task of finding the best developer to fix a bug is called bug triage. Most of the existing approaches consider the bug triage task as a classification problem, however, classification is not appropriate when the sets of classes change over time (as developers often do in a project). Furthermore, to the best of our knowledge, all the existing models use textual sources of information, i.e., bug descriptions, which are not always available. In this work, we explore the applicability of existing solutions for the bug triage problem when stack traces are used as the main data source of bug reports. Additionally, we reformulate this task as a ranking problem and propose new deep learning models to solve it. The models are based on a bidirectional recurrent neural network with attention and on a convolutional neural network, with the weights of the models optimized using a ranking loss function. To improve the quality of ranking, we propose using additional information from version control system annotations. Two approaches are proposed for extracting features from annotations: manual and using an additional neural network. To evaluate our models, we collected two datasets of real-world stack traces. Our experiments show that the proposed models outperform existing models adapted to handle stack traces. To facilitate further research in this area, we publish the source code of our models and one of the collected datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 275,333 |
cs/0504052 | Learning Multi-Class Neural-Network Models from Electroencephalograms | We describe a new algorithm for learning multi-class neural-network models from large-scale clinical electroencephalograms (EEGs). This algorithm trains hidden neurons separately to classify all the pairs of classes. To find the best pairwise classifiers, our algorithm searches for input variables which are relevant to the classification problem. Despite patient variability and heavily overlapping classes, a 16-class model learnt from EEGs of 65 sleeping newborns correctly classified 80.8% of the training and 80.1% of the testing examples. Additionally, the neural-network model provides a probabilistic interpretation of decisions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 538,663
2403.18684 | Scaling Laws For Dense Retrieval | Scaling up neural models has yielded significant advancements in a wide array of tasks, particularly in language generation. Previous studies have found that the performance of neural models frequently adheres to predictable scaling laws, correlated with factors such as training set size and model size. This insight is invaluable, especially as large-scale experiments grow increasingly resource-intensive. Yet, such scaling law has not been fully explored in dense retrieval due to the discrete nature of retrieval metrics and complex relationships between training data and model sizes in retrieval tasks. In this study, we investigate whether the performance of dense retrieval models follows the scaling law as other neural models. We propose to use contrastive log-likelihood as the evaluation metric and conduct extensive experiments with dense retrieval models implemented with different numbers of parameters and trained with different amounts of annotated data. Results indicate that, under our settings, the performance of dense retrieval models follows a precise power-law scaling related to the model size and the number of annotations. Additionally, we examine scaling with prevalent data augmentation methods to assess the impact of annotation quality, and apply the scaling law to find the best resource allocation strategy under a budget constraint. We believe that these insights will significantly contribute to understanding the scaling effect of dense retrieval models and offer meaningful guidance for future research endeavors. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 442,037 |
2302.14442 | City-scale Pollution Aware Traffic Routing by Sampling Max Flows using MCMC | A significant cause of air pollution in urban areas worldwide is the high volume of road traffic. Long-term exposure to severe pollution can cause serious health issues. One approach towards tackling this problem is to design a pollution-aware traffic routing policy that balances multiple objectives of i) avoiding extreme pollution in any area ii) enabling short transit times, and iii) making effective use of the road capacities. We propose a novel sampling-based approach for this problem. We provide the first construction of a Markov Chain that can sample integer max flow solutions of a planar graph, with theoretical guarantees that the probabilities depend on the aggregate transit length. We designed a traffic policy using diverse samples and simulated traffic on real-world road maps using the SUMO traffic simulator. We observe a considerable decrease in areas with severe pollution when experimented with maps of large cities across the world compared to other approaches. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 348,301
2304.05060 | SPIRiT-Diffusion: Self-Consistency Driven Diffusion Model for Accelerated MRI | Diffusion models have emerged as a leading methodology for image generation and have proven successful in the realm of magnetic resonance imaging (MRI) reconstruction. However, existing reconstruction methods based on diffusion models are primarily formulated in the image domain, making the reconstruction quality susceptible to inaccuracies in coil sensitivity maps (CSMs). k-space interpolation methods can effectively address this issue but conventional diffusion models are not readily applicable in k-space interpolation. To overcome this challenge, we introduce a novel approach called SPIRiT-Diffusion, which is a diffusion model for k-space interpolation inspired by the iterative self-consistent SPIRiT method. Specifically, we utilize the iterative solver of the self-consistent term (i.e., k-space physical prior) in SPIRiT to formulate a novel stochastic differential equation (SDE) governing the diffusion process. Subsequently, k-space data can be interpolated by executing the diffusion process. This innovative approach highlights the optimization model's role in designing the SDE in diffusion models, enabling the diffusion process to align closely with the physics inherent in the optimization model, a concept referred to as model-driven diffusion. We evaluated the proposed SPIRiT-Diffusion method using a 3D joint intracranial and carotid vessel wall imaging dataset. The results convincingly demonstrate its superiority over image-domain reconstruction methods, achieving high reconstruction quality even at a substantial acceleration rate of 10. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 357,470
1910.03162 | A Physics-Based Attack Detection Technique in Cyber-Physical Systems: A Model Predictive Control Co-Design Approach | In this paper a novel approach to co-design controller and attack detector for nonlinear cyber-physical systems affected by false data injection (FDI) attack is proposed. We augment the model predictive controller with an additional constraint requiring the future---in some steps ahead---trajectory of the system to remain in some time-invariant neighborhood of a properly designed reference trajectory. At any sampling time, we compare the real-time trajectory of the system with the designed reference trajectory, and construct a residual. The residual is then used in a nonparametric cumulative sum (CUSUM) anomaly detector to uncover FDI attacks on input and measurement channels. The effectiveness of the proposed approach is tested with a nonlinear model regarding level control of coupled tanks. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 148,430
2409.01022 | SINET: Sparsity-driven Interpretable Neural Network for Underwater Image Enhancement | Improving the quality of underwater images is essential for advancing marine research and technology. This work introduces a sparsity-driven interpretable neural network (SINET) for the underwater image enhancement (UIE) task. Unlike pure deep learning methods, our network architecture is based on a novel channel-specific convolutional sparse coding (CCSC) model, ensuring good interpretability of the underlying image enhancement process. The key feature of SINET is that it estimates the salient features from the three color channels using three sparse feature estimation blocks (SFEBs). The architecture of SFEB is designed by unrolling an iterative algorithm for solving the $\ell_1$ regularized convolutional sparse coding (CSC) problem. Our experiments show that SINET surpasses the state-of-the-art PSNR value by $1.05$ dB with $3873$ times lower computational complexity. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 485,179
2204.00193 | Epipolar Focus Spectrum: A Novel Light Field Representation and Application in Dense-view Reconstruction | Existing light field representations, such as epipolar plane image (EPI) and sub-aperture images, do not consider the structural characteristics across the views, so they usually require additional disparity and spatial structure cues for follow-up tasks. Besides, they have difficulties dealing with occlusions or larger disparity scenes. To this end, this paper proposes a novel Epipolar Focus Spectrum (EFS) representation by rearranging the EPI spectrum. Different from the classical EPI representation where an EPI line corresponds to a specific depth, there is a one-to-one mapping from the EFS line to the view. Accordingly, compared to a sparsely-sampled light field, a densely-sampled one with the same field of view (FoV) leads to a more compact distribution of such linear structures in the double-cone-shaped region with the identical opening angle in its corresponding EFS. Hence the EFS representation is invariant to the scene depth. To demonstrate its effectiveness, we develop a trainable EFS-based pipeline for light field reconstruction, where a dense light field can be reconstructed by compensating the "missing EFS lines" given a sparse light field, yielding promising results with cross-view consistency, especially in the presence of severe occlusion and large disparity. Experimental results on both synthetic and real-world datasets demonstrate the validity and superiority of the proposed method over SOTA methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 289,176
2105.05135 | kdehumor at semeval-2020 task 7: a neural network model for detecting funniness in dataset humicroedit | This paper describes our contribution to SemEval-2020 Task 7: Assessing Humor in Edited News Headlines. Here we present a method based on a deep neural network. In recent years, quite some attention has been devoted to humor production and perception. Our team KdeHumor employs recurrent neural network models including Bi-Directional LSTMs (BiLSTMs). Moreover, we utilize the state-of-the-art pre-trained sentence embedding techniques. We analyze the performance of our method and demonstrate the contribution of each component of our architecture. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 234,729
1701.00879 | PlatEMO: A MATLAB Platform for Evolutionary Multi-Objective Optimization | Over the last three decades, a large number of evolutionary algorithms have been developed for solving multiobjective optimization problems. However, there lacks an up-to-date and comprehensive software platform for researchers to properly benchmark existing algorithms and for practitioners to apply selected algorithms to solve their real-world problems. The demand of such a common tool becomes even more urgent, when the source code of many proposed algorithms has not been made publicly available. To address these issues, we have developed a MATLAB platform for evolutionary multi-objective optimization in this paper, called PlatEMO, which includes more than 50 multi-objective evolutionary algorithms and more than 100 multi-objective test problems, along with several widely used performance indicators. With a user-friendly graphical user interface, PlatEMO enables users to easily compare several evolutionary algorithms at one time and collect statistical results in Excel or LaTeX files. More importantly, PlatEMO is completely open source, such that users are able to develop new algorithms on the basis of it. This paper introduces the main features of PlatEMO and illustrates how to use it for performing comparative experiments, embedding new algorithms, creating new test problems, and developing performance indicators. Source code of PlatEMO is now available at: http://bimk.ahu.edu.cn/index.php?s=/Index/Software/index.html. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 66,330 |
1712.05644 | graphTPP: A multivariate based method for interactive graph layout and analysis | Graph layout is the process of creating a visual representation of a graph through a node-link diagram. Node-attribute graphs have additional data stored on the nodes which describe certain properties of the nodes called attributes. Typical force-directed representations often produce hairball-like structures that neither aid in understanding the graph's topology nor the relationship to its attributes. The aim of this research was to investigate the use of node-attributes for graph layout in order to improve the analysis process and to give further insight into the graph over purely topological layouts. In this article we present graphTPP, a graph based extension to targeted projection pursuit (TPP) --- an interactive, linear, dimension reduction technique --- as a method for graph layout and subsequent further analysis. TPP allows users to control the projection and is optimised for clustering. Three case studies were conducted in the areas of influence graphs, network security, and citation networks. In each case graphTPP was shown to outperform standard force-directed techniques and even other dimension reduction methods in terms of clarity of clustered structure in the layout, the association between the structure and the attributes and the insights elicited in each domain area. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 86,755
2401.04935 | Learning Audio Concepts from Counterfactual Natural Language | Conventional audio classification relied on predefined classes, lacking the ability to learn from free-form text. Recent methods unlock learning joint audio-text embeddings from raw audio-text pairs describing audio in natural language. Despite recent advancements, there is little exploration of systematic methods to train models for recognizing sound events and sources in alternative scenarios, such as distinguishing fireworks from gunshots at outdoor events in similar situations. This study introduces causal reasoning and counterfactual analysis in the audio domain. We use counterfactual instances and include them in our model across different aspects. Our model considers acoustic characteristics and sound source information from human-annotated reference texts. To validate the effectiveness of our model, we conducted pre-training utilizing multiple audio captioning datasets. We then evaluate with several common downstream tasks, demonstrating the merits of the proposed method as one of the first works leveraging counterfactual information in audio domain. Specifically, the top-1 accuracy in open-ended language-based audio retrieval task increased by more than 43%. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | 420,594 |
1910.06428 | Restoration of marker occluded hematoxylin and eosin stained whole slide histology images using generative adversarial networks | It is common for pathologists to annotate specific regions of the tissue, such as tumor, directly on the glass slide with markers. Although this practice was helpful prior to the advent of histology whole slide digitization, it often occludes important details which are increasingly relevant to immuno-oncology due to recent advancements in digital pathology imaging techniques. The current work uses a generative adversarial network with cycle loss to remove these annotations while still maintaining the underlying structure of the tissue by solving an image-to-image translation problem. We train our network on up to 300 whole slide images with marker inks and show that 70% of the corrected image patches are indistinguishable from originally uncontaminated image tissue to a human expert. This portion increases to 97% when we replace the human expert with a deep residual network. We demonstrated the fidelity of the method to the original image by calculating the correlation between image gradient magnitudes. We observed a revival of up to 94,000 nuclei per slide in our dataset, the majority of which were located on tissue border. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 149,333
2307.11867 | Large-Scale Multi-Fleet Platoon Coordination: A Dynamic Programming Approach | Truck platooning is a promising technology that enables trucks to travel in formations with small inter-vehicle distances for improved aerodynamics and fuel economy. The real-world transportation system includes a vast number of trucks owned by different fleet owners, for example, carriers. To fully exploit the benefits of platooning, efficient dispatching strategies that facilitate the platoon formations across fleets are required. This paper presents a distributed framework for addressing multi-fleet platoon coordination in large transportation networks, where each truck has a fixed route and aims to maximize its own fleet's platooning profit by scheduling its waiting times at hubs. The waiting time scheduling problem of individual trucks is formulated as a distributed optimal control problem with continuous decision space and a reward function that takes non-zero values only at discrete points. By suitably discretizing the decision and state spaces, we show that the problem can be solved exactly by dynamic programming, without loss of optimality. Finally, a realistic simulation study is conducted over the Swedish road network with $5,000$ trucks to evaluate the profit and efficiency of the approach. The simulation study shows that, compared to single-fleet platooning, multi-fleet platooning provided by our method achieves around $15$ times higher monetary profit and increases the CO$_2$ emission reductions from $0.4\%$ to $5.5\%$. In addition, it shows that the developed approach can be carried out in real-time and thus is suitable for platoon coordination in large transportation systems. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 381,059
1505.02973 | Comparing methods for Twitter Sentiment Analysis | This work extends the set of works which deal with the popular problem of sentiment analysis in Twitter. It investigates the most popular document ("tweet") representation methods which feed sentiment evaluation mechanisms. In particular, we study the bag-of-words, n-grams and n-gram graphs approaches and for each of them we evaluate the performance of a lexicon-based and 7 learning-based classification algorithms (namely SVM, Na\"ive Bayesian Networks, Logistic Regression, Multilayer Perceptrons, Best-First Trees, Functional Trees and C4.5) as well as their combinations, using a set of 4451 manually annotated tweets. The results demonstrate the superiority of learning-based methods and in particular of n-gram graphs approaches for predicting the sentiment of tweets. They also show that the combinatory approach has impressive effects on n-grams, raising the confidence up to 83.15% on the 5-Grams, using majority vote and a balanced dataset (equal number of positive, negative and neutral tweets for training). In the n-gram graph cases the improvement was small to none, reaching 94.52% on the 4-gram graphs, using Orthodromic distance and a threshold of 0.001. | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 43,021 |
2212.01197 | FedALA: Adaptive Local Aggregation for Personalized Federated Learning | A key challenge in federated learning (FL) is the statistical heterogeneity that impairs the generalization of the global model on each client. To address this, we propose a method Federated learning with Adaptive Local Aggregation (FedALA) by capturing the desired information in the global model for client models in personalized FL. The key component of FedALA is an Adaptive Local Aggregation (ALA) module, which can adaptively aggregate the downloaded global model and local model towards the local objective on each client to initialize the local model before training in each iteration. To evaluate the effectiveness of FedALA, we conduct extensive experiments with five benchmark datasets in computer vision and natural language processing domains. FedALA outperforms eleven state-of-the-art baselines by up to 3.27% in test accuracy. Furthermore, we also apply ALA module to other federated learning methods and achieve up to 24.19% improvement in test accuracy. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 334,341 |
2311.13878 | Minimizing Factual Inconsistency and Hallucination in Large Language Models | Large Language Models (LLMs) are widely used in critical fields such as healthcare, education, and finance due to their remarkable proficiency in various language-related tasks. However, LLMs are prone to generating factually incorrect responses or "hallucinations," which can lead to a loss of credibility and trust among users. To address this issue, we propose a multi-stage framework that generates the rationale first, verifies and refines incorrect ones, and uses them as supporting references to generate the answer. The generated rationale enhances the transparency of the answer and our framework provides insights into how the model arrived at this answer, by using this rationale and the references to the context. In this paper, we demonstrate its effectiveness in improving the quality of responses to drug-related inquiries in the life sciences industry. Our framework improves traditional Retrieval Augmented Generation (RAG) by enabling OpenAI GPT-3.5-turbo to be 14-25% more faithful and 16-22% more accurate on two datasets. Furthermore, fine-tuning samples based on our framework improves the accuracy of smaller open-access LLMs by 33-42% and competes with RAG on commercial models. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 409,909
1909.09577 | NeMo: a toolkit for building AI applications using Neural Modules | NeMo (Neural Modules) is a Python framework-agnostic toolkit for creating AI applications through re-usability, abstraction, and composition. NeMo is built around neural modules, conceptual blocks of neural networks that take typed inputs and produce typed outputs. Such modules typically represent data layers, encoders, decoders, language models, loss functions, or methods of combining activations. NeMo makes it easy to combine and re-use these building blocks while providing a level of semantic correctness checking via its neural type system. The toolkit comes with extendable collections of pre-built modules for automatic speech recognition and natural language processing. Furthermore, NeMo provides built-in support for distributed training and mixed precision on latest NVIDIA GPUs. NeMo is open-source https://github.com/NVIDIA/NeMo | false | false | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 146,297 |
2112.10775 | HarmoFL: Harmonizing Local and Global Drifts in Federated Learning on Heterogeneous Medical Images | Multiple medical institutions collaboratively training a model using federated learning (FL) has become a promising solution for maximizing the potential of data-driven models, yet the non-independent and identically distributed (non-iid) data in medical images is still an outstanding challenge in real-world practice. The feature heterogeneity caused by diverse scanners or protocols introduces a drift in the learning process, in both local (client) and global (server) optimizations, which harms the convergence as well as model performance. Many previous works have attempted to address the non-iid issue by tackling the drift locally or globally, but how to jointly solve the two essentially coupled drifts is still unclear. In this work, we concentrate on handling both local and global drifts and introduce a new harmonizing framework called HarmoFL. First, we propose to mitigate the local update drift by normalizing amplitudes of images transformed into the frequency domain to mimic a unified imaging setting, in order to generate a harmonized feature space across local clients. Second, based on harmonized features, we design a client weight perturbation guiding each local model to reach a flat optimum, where a neighborhood area of the local optimal solution has a uniformly low loss. Without any extra communication cost, the perturbation assists the global model to optimize towards a converged optimal solution by aggregating several local flat optima. We have theoretically analyzed the proposed method and empirically conducted extensive experiments on three medical image classification and segmentation tasks, showing that HarmoFL outperforms a set of recent state-of-the-art methods with promising convergence behavior. Code is available at https://github.com/med-air/HarmoFL. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 272,530
2007.02758 | Sentiment Polarity Detection on Bengali Book Reviews Using Multinomial Naive Bayes | Recently, sentiment polarity detection has attracted increasing attention from NLP researchers due to the massive availability of customer's opinions or reviews in the online platform. Due to the continued expansion of e-commerce sites, the rate of purchase of various products, including books, is growing enormously among the people. Reader's opinions/reviews affect the buying decision of a customer in most cases. This work introduces a machine learning-based technique to determine sentiment polarities (either positive or negative category) from Bengali book reviews. To assess the effectiveness of the proposed technique, a corpus with 2000 reviews on Bengali books is developed. A comparative analysis with various approaches (such as logistic regression, naive Bayes, SVM, and SGD) was also performed by taking into consideration the unigram, bigram, and trigram features, respectively. Experimental results reveal that the multinomial Naive Bayes with unigram feature outperforms the other techniques with 84% accuracy on the test set. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 185,846
1512.02752 | A Novel Regularized Principal Graph Learning Framework on Explicit Graph Representation | Many scientific datasets are of high dimension, and the analysis usually requires visual manipulation by retaining the most important structures of data. Principal curve is a widely used approach for this purpose. However, many existing methods work only for data with structures that are not self-intersected, which is quite restrictive for real applications. A few methods can overcome the above problem, but they either require complicated human-made rules for a specific task with lack of convergence guarantee and adaptation flexibility to different tasks, or cannot obtain explicit structures of data. To address these issues, we develop a new regularized principal graph learning framework that captures the local information of the underlying graph structure based on reversed graph embedding. As showcases, models that can learn a spanning tree or a weighted undirected $\ell_1$ graph are proposed, and a new learning algorithm is developed that learns a set of principal points and a graph structure from data, simultaneously. The new algorithm is simple with guaranteed convergence. We then extend the proposed framework to deal with large-scale data. Experimental results on various synthetic and six real world datasets show that the proposed method compares favorably with baselines and can uncover the underlying structure correctly. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 49,966
2107.01303 | Data-driven mapping between functional connectomes using optimal transport | Functional connectomes derived from functional magnetic resonance imaging have long been used to understand the functional organization of the brain. Nevertheless, a connectome is intrinsically linked to the atlas used to create it. In other words, a connectome generated from one atlas is different in scale and resolution compared to a connectome generated from another atlas. Being able to map connectomes and derived results between different atlases without additional pre-processing is a crucial step in improving interpretation and generalization between studies that use different atlases. Here, we use optimal transport, a powerful mathematical technique, to find an optimum mapping between two atlases. This mapping is then used to transform time series from one atlas to another in order to reconstruct a connectome. We validate our approach by comparing transformed connectomes against their "gold-standard" counterparts (i.e., connectomes generated directly from an atlas) and demonstrate the utility of transformed connectomes by applying these connectomes to predictive models based on a different atlas. We show that these transformed connectomes are significantly similar to their "gold-standard" counterparts and maintain individual differences in brain-behavior associations, demonstrating both the validity of our approach and its utility in downstream analyses. Overall, our approach is a promising avenue to increase the generalization of connectome-based results across different atlases. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 244,437
2406.18930 | Reasoning About Action and Change | The purpose of this book is to provide an overview of AI research, ranging from basic work to interfaces and applications, with as much emphasis on results as on current issues. It is aimed at an audience of master students and Ph.D. students, and can be of interest as well for researchers and engineers who want to know more about AI. The book is split into three volumes. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 468,239 |
1803.00969 | Energy Efficiency of Opportunistic Device-to-Device Relaying Under
Lognormal Shadowing | Energy consumption is a major limitation of low-power and mobile devices. Efficient transmission protocols are required to minimize the energy consumption of mobile devices for ubiquitous connectivity in next-generation wireless networks. Opportunistic schemes select a single relay using the criterion of the best channel and achieve near-optimal diversity performance in a cooperative wireless system. In this paper, we study the energy efficiency of opportunistic schemes for device-to-device communication. In the opportunistic approach, the energy consumed by devices is minimized by selecting a single neighboring device as a relay, using the criterion of minimum consumed energy in each transmission, in the uplink of a wireless network. We derive analytical bounds and scaling laws on the expected energy consumption when the devices experience log-normal shadowing with respect to a base station, considering both the transmission and circuit energy consumption. We show that the protocol improves the energy efficiency of the network compared to direct transmission even if only a few devices are considered for relaying. We also demonstrate the effectiveness of the protocol by means of simulations in realistic scenarios of the wireless network. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 91,788
2403.05557 | Re-thinking Human Activity Recognition with Hierarchy-aware Label
Relationship Modeling | Human Activity Recognition (HAR) has been studied for decades, from data collection, learning models, to post-processing and result interpretations. However, the inherent hierarchy in the activities remains relatively under-explored, despite its significant impact on model performance and interpretation. In this paper, we propose H-HAR, by rethinking the HAR tasks from a fresh perspective by delving into their intricate global label relationships. Rather than building multiple classifiers separately for multi-layered activities, we explore the efficacy of a flat model enhanced with graph-based label relationship modeling. Being hierarchy-aware, the graph-based label modeling enhances the fundamental HAR model, by incorporating intricate label relationships into the model. We validate the proposal with a multi-label classifier on complex human activity data. The results highlight the advantages of the proposal, which can be vertically integrated into advanced HAR models to further enhance their performances. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 436,056 |
2108.02095 | Human-In-The-Loop Document Layout Analysis | Document layout analysis (DLA) aims to divide a document image into different types of regions. DLA plays an important role in document content understanding and information extraction systems. Exploring a method that can use less data for effective training contributes to the development of DLA. We consider Human-in-the-loop (HITL) collaborative intelligence in DLA. Our approach was inspired by the fact that HITL pushes the model to learn from unknown problems by adding a small amount of data based on knowledge. HITL selects key samples using confidence. However, using confidence to find key samples is not suitable for DLA tasks. We propose the Key Samples Selection (KSS) method to find key samples in high-level tasks (semantic segmentation) more accurately through agent collaboration, effectively reducing costs. Once selected, these key samples are passed to human beings for active labeling, and the model is then updated with the labeled samples. Hence, we revisited the learning system from reinforcement learning and designed a sample-based agent update strategy, which effectively improves the agent's ability to accept new samples. The method achieves significant improvements on two benchmarks (DSSE-200 (from 77.1% to 86.3%) and CS-150 (from 88.0% to 95.6%)) using 10% of the labeled data. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 249,220
2203.10609 | A Novel Transparency Strategy-based Data Augmentation Approach for
BI-RADS Classification of Mammograms | Image augmentation techniques have been widely investigated to improve the performance of deep learning (DL) algorithms on mammography classification tasks. Recent methods have proved the efficiency of image augmentation on data deficiency or data imbalance issues. In this paper, we propose a novel transparency strategy to boost the Breast Imaging Reporting and Data System (BI-RADS) scores of mammogram classifiers. The proposed approach utilizes the Region of Interest (ROI) information to generate more high-risk training examples for breast cancer (BI-RADS 3, 4, 5) from original images. Our extensive experiments on three different datasets show that the proposed approach significantly improves the mammogram classification performance and surpasses a state-of-the-art data augmentation technique called CutMix. This study also highlights that our transparency method is more effective than other augmentation strategies for BI-RADS classification and can be widely applied to other computer vision tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 286,608 |
2008.01059 | Improving One-stage Visual Grounding by Recursive Sub-query Construction | We improve one-stage visual grounding by addressing current limitations on grounding long and complex queries. Existing one-stage methods encode the entire language query as a single sentence embedding vector, e.g., taking the embedding from BERT or the hidden state from LSTM. This single vector representation is prone to overlooking the detailed descriptions in the query. To address this query modeling deficiency, we propose a recursive sub-query construction framework, which reasons between image and query for multiple rounds and reduces the referring ambiguity step by step. We show our new one-stage method obtains 5.0%, 4.5%, 7.5%, 12.8% absolute improvements over the state-of-the-art one-stage baseline on ReferItGame, RefCOCO, RefCOCO+, and RefCOCOg, respectively. In particular, the superior performance on longer and more complex queries validates the effectiveness of our query modeling. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 190,191
2409.13672 | Recent Advances in Non-convex Smoothness Conditions and Applicability to
Deep Linear Neural Networks | The presence of non-convexity in smooth optimization problems arising from deep learning have sparked new smoothness conditions in the literature and corresponding convergence analyses. We discuss these smoothness conditions, order them, provide conditions for determining whether they hold, and evaluate their applicability to training a deep linear neural network for binary classification. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 490,092 |
2301.01424 | Scene Synthesis from Human Motion | Large-scale capture of human motion with diverse, complex scenes, while immensely useful, is often considered prohibitively costly. Meanwhile, human motion alone contains rich information about the scene they reside in and interact with. For example, a sitting human suggests the existence of a chair, and their leg position further implies the chair's pose. In this paper, we propose to synthesize diverse, semantically reasonable, and physically plausible scenes based on human motion. Our framework, Scene Synthesis from HUMan MotiON (SUMMON), includes two steps. It first uses ContactFormer, our newly introduced contact predictor, to obtain temporally consistent contact labels from human motion. Based on these predictions, SUMMON then chooses interacting objects and optimizes physical plausibility losses; it further populates the scene with objects that do not interact with humans. Experimental results demonstrate that SUMMON synthesizes feasible, plausible, and diverse scenes and has the potential to generate extensive human-scene interaction data for the community. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 339,233 |
1906.10048 | SurReal: Fr\'echet Mean and Distance Transform for Complex-Valued Deep
Learning | We develop a novel deep learning architecture for naturally complex-valued data, which is often subject to complex scaling ambiguity. We treat each sample as a field in the space of complex numbers. With the polar form of a complex-valued number, the general group that acts in this space is the product of planar rotation and non-zero scaling. This perspective allows us to develop not only a novel convolution operator using weighted Fr\'echet mean (wFM) on a Riemannian manifold, but also a novel fully connected layer operator using the distance to the wFM, with natural equivariant properties to non-zero scaling and planar rotation for the former and invariance properties for the latter. Compared to the baseline approach of learning real-valued neural network models on the two-channel real-valued representation of complex-valued data, our method achieves surreal performance on two publicly available complex-valued datasets: MSTAR on SAR images and RadioML on radio frequency signals. On MSTAR, at 8% of the baseline model size and with fewer than 45,000 parameters, our model improves the target classification accuracy from 94% to 98% on this highly imbalanced dataset. On RadioML, our model achieves comparable RF modulation classification accuracy at 10% of the baseline model size. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 136,343 |
2205.03153 | Bridging the Domain Gap for Stance Detection for the Zulu language | Misinformation has become a major concern in recent years given its spread across our information sources. Many NLP tasks have been introduced in this area, with some systems reaching good results on English-language datasets. Existing AI-based approaches to fighting misinformation in the literature suggest automatic stance detection as an integral first step to success. Our paper aims to utilize this progress made for English to transfer that knowledge to other languages, which is a non-trivial task due to the domain gap between English and the target languages. We propose a black-box, non-intrusive method that utilizes techniques from Domain Adaptation to reduce the domain gap, without requiring any human expertise in the target language, by leveraging low-quality data in both a supervised and unsupervised manner. This allows us to rapidly achieve results for stance detection in Zulu, the target language in this work, similar to those found for English. We also provide a stance detection dataset in the Zulu language. Our experimental results show that by leveraging English datasets and machine translation, we can improve performance on both English data and other languages. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 295,187
2303.05208 | Geometry of Language | In this article, we present a fresh perspective on language, combining ideas from various sources, but mixed in a new synthesis. As in the minimalist program, the question is whether we can formulate an elegant formalism, a universal grammar or a mechanism which explains significant aspects of the human faculty of language, which in turn can be considered a natural disposition for the evolution and deployment of the diverse human languages. We describe such a mechanism, which differs from existing logical and grammatical approaches by its geometric nature. Our main contribution is to explore the assumption that sentence recognition takes place by forming chains of tokens representing words, followed by matching these chains with pre-existing chains representing grammatical word orders. The aligned chains of tokens give rise to two- and three-dimensional complexes. The resulting model gives an alternative presentation for subtle rules, traditionally formalized using categorial grammar. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 350,386 |
1005.5516 | On the Fly Query Entity Decomposition Using Snippets | One of the most important issues in Information Retrieval is inferring the intents underlying users' queries. Thus, any tool to enrich or to better contextualize queries can prove extremely valuable. Entity extraction, provided it is done fast, can be one such tool. Such techniques usually rely on a prior training phase involving large datasets. That training is costly, especially in environments which are increasingly moving towards real-time scenarios where the latency to retrieve fresh information should be minimal. In this paper, an `on-the-fly' query decomposition method is proposed. It uses snippets which are mined by means of a na\"ive statistical algorithm. An initial evaluation of such a method is provided, in addition to a discussion on its applicability to different scenarios. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 6,611
2406.13781 | A Primal-Dual Framework for Transformers and Neural Networks | Self-attention is key to the remarkable success of transformers in sequence modeling tasks including many applications in natural language processing and computer vision. Like neural network layers, these attention mechanisms are often developed by heuristics and experience. To provide a principled framework for constructing attention layers in transformers, we show that the self-attention corresponds to the support vector expansion derived from a support vector regression problem, whose primal formulation has the form of a neural network layer. Using our framework, we derive popular attention layers used in practice and propose two new attentions: 1) the Batch Normalized Attention (Attention-BN) derived from the batch normalization layer and 2) the Attention with Scaled Head (Attention-SH) derived from using less training data to fit the SVR model. We empirically demonstrate the advantages of the Attention-BN and Attention-SH in reducing head redundancy, increasing the model's accuracy, and improving the model's efficiency in a variety of practical applications including image and time-series classification. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | false | 465,996 |
2401.13537 | Masked Particle Modeling on Sets: Towards Self-Supervised High Energy
Physics Foundation Models | We propose masked particle modeling (MPM) as a self-supervised method for learning generic, transferable, and reusable representations on unordered sets of inputs for use in high energy physics (HEP) scientific data. This work provides a novel scheme to perform masked modeling based pre-training to learn permutation invariant functions on sets. More generally, this work provides a step towards building large foundation models for HEP that can be generically pre-trained with self-supervised learning and later fine-tuned for a variety of down-stream tasks. In MPM, particles in a set are masked and the training objective is to recover their identity, as defined by a discretized token representation of a pre-trained vector quantized variational autoencoder. We study the efficacy of the method in samples of high energy jets at collider physics experiments, including studies on the impact of discretization, permutation invariance, and ordering. We also study the fine-tuning capability of the model, showing that it can be adapted to tasks such as supervised and weakly supervised jet classification, and that the model can transfer efficiently with small fine-tuning data sets to new classes and new data domains. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 423,759 |
2402.08320 | The Paradox of Motion: Evidence for Spurious Correlations in
Skeleton-based Gait Recognition Models | Gait, an unobtrusive biometric, is valued for its capability to identify individuals at a distance, across external outfits and environmental conditions. This study challenges the prevailing assumption that vision-based gait recognition, in particular skeleton-based gait recognition, relies primarily on motion patterns, revealing a significant role of the implicit anthropometric information encoded in the walking sequence. We show through a comparative analysis that removing height information leads to notable performance degradation across three models and two benchmarks (CASIA-B and GREW). Furthermore, we propose a spatial transformer model processing individual poses, disregarding any temporal information, which achieves unreasonably good accuracy, emphasizing the bias towards appearance information and indicating spurious correlations in existing benchmarks. These findings underscore the need for a nuanced understanding of the interplay between motion and appearance in vision-based gait recognition, prompting a reevaluation of the methodological assumptions in this field. Our experiments indicate that "in-the-wild" datasets are less prone to spurious correlations, prompting the need for more diverse and large scale datasets for advancing the field. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 429,059 |
2206.07632 | Exploring Chemical Space with Score-based Out-of-distribution Generation | A well-known limitation of existing molecular generative models is that the generated molecules highly resemble those in the training set. To generate truly novel molecules that may have even better properties for de novo drug discovery, more powerful exploration of the chemical space is necessary. To this end, we propose Molecular Out-Of-distribution Diffusion (MOOD), a score-based diffusion scheme that incorporates out-of-distribution (OOD) control in the generative stochastic differential equation (SDE) with simple control of a hyperparameter, thus requiring no additional costs. Since some novel molecules may not meet the basic requirements of real-world drugs, MOOD performs conditional generation by utilizing the gradients from a property predictor that guides the reverse-time diffusion process to high-scoring regions according to target properties such as protein-ligand interactions, drug-likeness, and synthesizability. This allows MOOD to search for novel and meaningful molecules rather than generating unseen yet trivial ones. We experimentally validate that MOOD is able to explore the chemical space beyond the training distribution, generating molecules that outscore ones found with existing methods, and even the top 0.01% of the original training pool. Our code is available at https://github.com/SeulLee05/MOOD. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 302,811
2106.12444 | Prospects for Analog Circuits in Deep Networks | Operations typically used in machine learning algorithms (e.g., adds and softmax) can be implemented by compact analog circuits. Analog Application-Specific Integrated Circuit (ASIC) designs that implement these algorithms using techniques such as charge sharing circuits and subthreshold transistors achieve very high power efficiencies. With the recent advances in deep learning algorithms, focus has shifted to hardware digital accelerator designs that implement the prevalent matrix-vector multiplication operations. Power in these designs is usually dominated by the memory access power of off-chip DRAM needed for storing the network weights and activations. Emerging dense non-volatile memory technologies can help to provide on-chip memory, and analog circuits can be well suited to implement the needed matrix-vector multiplication operations coupled with in-memory computing approaches. This paper presents a brief review of analog designs that implement various machine learning algorithms. It then presents an outlook for the use of analog circuits in low-power deep network accelerators suitable for edge or tiny machine learning applications. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | 242,726
2408.01035 | Structure from Motion-based Motion Estimation and 3D Reconstruction of
Unknown Shaped Space Debris | With the boost in the number of spacecraft launches in recent decades, the space debris problem is becoming increasingly critical. For sustainable space utilization, the continuous removal of space debris is one of the most severe problems for humanity. To maximize the reliability of a debris capture mission in orbit, accurate motion estimation of the target is essential. Space debris has lost its attitude and orbit control capabilities, and its shape is unknown due to breakup. This paper proposes a Structure from Motion-based algorithm to perform motion estimation of unknown-shaped space debris with limited resources, where only 2D images are required as input. The method then outputs the reconstructed shape of the unknown object and the relative pose trajectory between the target and the camera simultaneously, which are exploited to estimate the target's motion. The method is quantitatively validated with a realistic image dataset generated by a microgravity experiment in a 2D air-floating testbed and 3D kinematic simulation. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 478,076
2102.12463 | Generating and Blending Game Levels via Quality-Diversity in the Latent
Space of a Variational Autoencoder | Several works have demonstrated the use of variational autoencoders (VAEs) for generating levels in the style of existing games and blending levels across different games. Further, quality-diversity (QD) algorithms have also become popular for generating varied game content by using evolution to explore a search space while focusing on both variety and quality. To reap the benefits of both these approaches, we present a level generation and game blending approach that combines the use of VAEs and QD algorithms. Specifically, we train VAEs on game levels and run the MAP-Elites QD algorithm using the learned latent space of the VAE as the search space. The latent space captures the properties of the games whose levels we want to generate and blend, while MAP-Elites searches this latent space to find a diverse set of levels optimizing a given objective such as playability. We test our method using models for 5 different platformer games as well as a blended domain spanning 3 of these games. We refer to using MAP-Elites for blending as Blend-Elites. Our results show that MAP-Elites in conjunction with VAEs enables the generation of a diverse set of playable levels not just for each individual game but also for the blended domain while illuminating game-specific regions of the blended latent space. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 221,738 |
2410.00502 | Multi-Target Cross-Lingual Summarization: a novel task and a
language-neutral approach | Cross-lingual summarization aims to bridge language barriers by summarizing documents in different languages. However, ensuring semantic coherence across languages is an overlooked challenge and can be critical in several contexts. To fill this gap, we introduce multi-target cross-lingual summarization as the task of summarizing a document into multiple target languages while ensuring that the produced summaries are semantically similar. We propose a principled re-ranking approach to this problem and a multi-criteria evaluation protocol to assess semantic coherence across target languages, marking a first step that will hopefully stimulate further research on this problem. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 493,406 |
1903.01284 | Relation Extraction Datasets in the Digital Humanities Domain and their
Evaluation with Word Embeddings | In this research, we manually create high-quality datasets in the digital humanities domain for the evaluation of language models, specifically word embedding models. The first step comprises the creation of unigram and n-gram datasets for two fantasy novel book series for two task types each, analogy and doesn't-match. This is followed by the training of models on the two book series with various popular word embedding model types such as word2vec, GloVe, fastText, or LexVec. Finally, we evaluate the suitability of word embedding models for such specific relation extraction tasks in a situation of comparably small corpus sizes. In the evaluations, we also investigate and analyze particular aspects such as the impact of corpus term frequencies and task difficulty on accuracy. The datasets, and the underlying system and word embedding models are available on github and can be easily extended with new datasets and tasks, be used to reproduce the presented results, or be transferred to other domains. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 123,225 |
1907.03641 | Smart Households Demand Response Management with Micro Grid | Nowadays, emerging smart grid technology opens up the possibility of two-way communication between customers and energy utilities. Demand Response Management (DRM) offers the promise of saving money for commercial customers and households while helping utilities operate more efficiently. In this paper, an Incentive-based Demand Response Optimization (IDRO) model is proposed to efficiently schedule household appliances for minimum usage during peak hours. The proposed method is a multi-objective optimization technique based on a Nonlinear Auto-Regressive Neural Network (NAR-NN) which considers energy provided by the utility and a rooftop-installed photovoltaic (PV) system. The proposed method is tested and verified using 300 case studies (households). Data analysis for a period of one year shows a noticeable improvement in power factor and customers' bills. | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | 137,901
2007.03032 | Continual Learning in Human Activity Recognition: an Empirical Analysis
of Regularization | Given the growing trend of continual learning techniques for deep neural networks focusing on the domain of computer vision, there is a need to identify which of these generalizes well to other tasks such as human activity recognition (HAR). As recent methods have mostly been composed of loss regularization terms and memory replay, we provide a constituent-wise analysis of some prominent task-incremental learning techniques employing these on HAR datasets. We find that most regularization approaches lack substantial effect and provide an intuition of when they fail. Thus, we make the case that the development of continual learning algorithms should be motivated by rather diverse task domains. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 185,922 |
2401.00870 | ConfusionPrompt: Practical Private Inference for Online Large Language
Models | State-of-the-art large language models (LLMs) are typically deployed as online services, requiring users to transmit detailed prompts to cloud servers. This raises significant privacy concerns. In response, we introduce ConfusionPrompt, a novel framework for private LLM inference that protects user privacy by: (i) decomposing the original prompt into smaller sub-prompts, and (ii) generating pseudo-prompts alongside the genuine sub-prompts, which are then sent to the LLM. The server responses are later recomposed by the user to reconstruct the final output. This approach offers key advantages over previous LLM privacy protection methods: (i) it integrates seamlessly with existing black-box LLMs, and (ii) it delivers a significantly improved privacy-utility trade-off compared to existing text perturbation methods. We also develop a $(\lambda, \mu, \rho)$-privacy model to formulate the requirements for a privacy-preserving group of prompts and provide a complexity analysis to justify the role of prompt decomposition. Our empirical evaluation shows that ConfusionPrompt achieves significantly higher utility than local inference methods using open-source models and perturbation-based techniques, while also reducing memory consumption compared to open-source LLMs. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 419,136 |
2111.07256 | Towards annotation of text worlds in a literary work | Literary texts are usually rich in meanings, and their interpretation complicates corpus studies and automatic processing. There have been several attempts to create collections of literary texts with annotation of literary elements like the author's speech, characters, events, scenes, etc. However, they resulted in small collections and standalone rules for annotation. The present article describes an experiment on lexical annotation of text worlds in a literary work and quantitative methods for their comparison. The experiment shows that for well-agreed tag assignment, annotation rules should be set much more strictly. However, if borders between text worlds and other elements are the result of subjective interpretation, they should be modeled as fuzzy entities. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 266,320
2302.09155 | Med-EASi: Finely Annotated Dataset and Models for Controllable
Simplification of Medical Texts | Automatic medical text simplification can assist providers with patient-friendly communication and make medical texts more accessible, thereby improving health literacy. But curating a quality corpus for this task requires the supervision of medical experts. In this work, we present $\textbf{Med-EASi}$ ($\underline{\textbf{Med}}$ical dataset for $\underline{\textbf{E}}$laborative and $\underline{\textbf{A}}$bstractive $\underline{\textbf{Si}}$mplification), a uniquely crowdsourced and finely annotated dataset for supervised simplification of short medical texts. Its $\textit{expert-layman-AI collaborative}$ annotations facilitate $\textit{controllability}$ over text simplification by marking four kinds of textual transformations: elaboration, replacement, deletion, and insertion. To learn medical text simplification, we fine-tune T5-large with four different styles of input-output combinations, leading to two control-free and two controllable versions of the model. We add two types of $\textit{controllability}$ into text simplification, by using a multi-angle training approach: $\textit{position-aware}$, which uses in-place annotated inputs and outputs, and $\textit{position-agnostic}$, where the model only knows the contents to be edited, but not their positions. Our results show that our fine-grained annotations improve learning compared to the unannotated baseline. Furthermore, $\textit{position-aware}$ control generates better simplification than the $\textit{position-agnostic}$ one. The data and code are available at https://github.com/Chandrayee/CTRL-SIMP. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 346,300 |
2501.01874 | DFF: Decision-Focused Fine-tuning for Smarter Predict-then-Optimize with Limited Data | Decision-focused learning (DFL) offers an end-to-end approach to the predict-then-optimize (PO) framework by training predictive models directly on decision loss (DL), enhancing decision-making performance within PO contexts. However, the implementation of DFL poses distinct challenges. Primarily, DL can result in deviation from the physical significance of the predictions under limited data. Additionally, some predictive models are non-differentiable or black-box, which cannot be adjusted using gradient-based methods. To tackle the above challenges, we propose a novel framework, Decision-Focused Fine-tuning (DFF), which embeds the DFL module into the PO pipeline via a novel bias correction module. DFF is formulated as a constrained optimization problem that maintains the proximity of the DL-enhanced model to the original predictive model within a defined trust region. We theoretically prove that DFF strictly confines prediction bias within a predetermined upper bound, even with limited datasets, thereby substantially reducing prediction shifts caused by DL under limited data. Furthermore, the bias correction module can be integrated into diverse predictive models, enhancing adaptability to a broad range of PO tasks. Extensive evaluations on synthetic and real-world datasets, including network flow, portfolio optimization, and resource allocation problems with different predictive models, demonstrate that DFF not only improves decision performance but also adheres to fine-tuning constraints, showcasing robust adaptability across various scenarios. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 522,247 |
2202.00772 | PiP-X: Online feedback motion planning/replanning in dynamic environments using invariant funnels | Computing kinodynamically feasible motion plans and repairing them on-the-fly as the environment changes is a challenging, yet relevant problem in robot-navigation. We propose a novel online single-query sampling-based motion re-planning algorithm - PiP-X, using finite-time invariant sets - funnels. We combine concepts from sampling-based methods, nonlinear systems analysis and control theory to create a single framework that enables feedback motion re-planning for any general nonlinear dynamical system in dynamic workspaces. A volumetric funnel-graph is constructed using sampling-based methods, and an optimal funnel-path from robot configuration to a desired goal region is then determined by computing the shortest-path subtree in it. Analysing and formally quantifying the stability of trajectories using Lyapunov level-set theory ensures kinodynamic feasibility and guaranteed set-invariance of the solution-paths. The use of incremental search techniques and a pre-computed library of motion-primitives ensure that our method can be used for quick online rewiring of controllable motion plans in densely cluttered and dynamic environments. We represent traversability and sequencibility of trajectories together in the form of an augmented directed-graph, helping us leverage discrete graph-based replanning algorithms to efficiently recompute feasible and controllable motion plans that are volumetric in nature. We validate our approach on a simulated 6DOF quadrotor platform in a variety of scenarios within a maze and random forest environment. From repeated experiments, we analyse the performance in terms of algorithm-success and length of traversed-trajectory. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 278,252 |
1605.04672 | A Critical Examination of RESCAL for Completion of Knowledge Bases with Transitive Relations | Link prediction in large knowledge graphs has received a lot of attention recently because of its importance for inferring missing relations and for completing and improving noisily extracted knowledge graphs. Over the years a number of machine learning researchers have presented various models for predicting the presence of missing relations in a knowledge base. Although all the previous methods are presented with empirical results that show high performance on select datasets, there is almost no previous work on understanding the connection between properties of a knowledge base and the performance of a model. In this paper we analyze the RESCAL method and prove that it can not encode asymmetric transitive relations in knowledge bases. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | true | false | 55,904 |
2310.18479 | Weighted Sampled Split Learning (WSSL): Balancing Privacy, Robustness, and Fairness in Distributed Learning Environments | This study presents Weighted Sampled Split Learning (WSSL), an innovative framework tailored to bolster privacy, robustness, and fairness in distributed machine learning systems. Unlike traditional approaches, WSSL disperses the learning process among multiple clients, thereby safeguarding data confidentiality. Central to WSSL's efficacy is its utilization of weighted sampling. This approach ensures equitable learning by tactically selecting influential clients based on their contributions. Our evaluation of WSSL spanned various client configurations and employed two distinct datasets: Human Gait Sensor and CIFAR-10. We observed three primary benefits: heightened model accuracy, enhanced robustness, and maintained fairness across diverse client compositions. Notably, our distributed frameworks consistently surpassed centralized counterparts, registering accuracy peaks of 82.63% and 75.51% for the Human Gait Sensor and CIFAR-10 datasets, respectively. These figures contrast with the top accuracies of 81.12% and 58.60% achieved by centralized systems. Collectively, our findings champion WSSL as a potent and scalable successor to conventional centralized learning, marking it as a pivotal stride forward in privacy-focused, resilient, and impartial distributed machine learning. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 403,562 |
2403.11795 | Low-Cost Privacy-Aware Decentralized Learning | This paper introduces ZIP-DL, a novel privacy-aware decentralized learning (DL) algorithm that exploits correlated noise to provide strong privacy protection against a local adversary while yielding efficient convergence guarantees for a low communication cost. The progressive neutralization of the added noise during the distributed aggregation process results in ZIP-DL fostering a high model accuracy under privacy guarantees. ZIP-DL further uses a single communication round between each gradient descent, thus minimizing communication overhead. We provide theoretical guarantees for both convergence speed and privacy guarantees, thereby making ZIP-DL applicable to practical scenarios. Our extensive experimental study shows that ZIP-DL significantly outperforms the state-of-the-art in terms of vulnerability/accuracy trade-off. In particular, ZIP-DL (i) reduces the efficacy of linkability attacks by up to 52 percentage points compared to baseline DL, (ii) improves accuracy by up to 37 percent w.r.t. the state-of-the-art privacy-preserving mechanism operating under the same threat model as ours, when configured to provide the same protection against membership inference attacks, and (iii) reduces communication by up to 10.5x against the same competitor for the same level of protection. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 438,866 |
2110.02896 | Predicting the Popularity of Games on Steam | The video game industry has seen rapid growth over the last decade. Thousands of video games are released and played by millions of people every year, creating a large community of players. Steam is a leading gaming platform and social networking site, which allows its users to purchase and store games. A by-product of Steam is a large database of information about games, players, and gaming behavior. In this paper, we take recent video games released on Steam and aim to discover the relation between game popularity and a game's features that can be acquired through Steam. We approach this task by predicting the popularity of Steam games in the early stages after their release and we use a Bayesian approach to understand the influence of a game's price, size, supported languages, release date, and genres on its player count. We implement several models and discover that a genre-based hierarchical approach achieves the best performance. We further analyze the model and interpret its coefficients, which indicate that games released at the beginning of the month and games of certain genres correlate with game popularity. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 259,291 |
2202.01473 | A multi-domain virtual network embedding algorithm with delay prediction | Virtual network embedding (VNE) is a crucial part of network virtualization (NV), which aims to map virtual networks (VNs) to a shared substrate network (SN). With the emergence of various delay-sensitive applications, how to improve the delay performance of the system has become a hot topic in academic circles. Based on extensive research, we propose a multi-domain virtual network embedding algorithm based on delay prediction (DP-VNE). First, the candidate physical nodes are selected by estimating the delay of virtual requests; then a particle swarm optimization (PSO) algorithm is used to optimize the mapping process, so as to reduce the delay of the system. The simulation results show that, compared with the other three advanced algorithms, the proposed algorithm can significantly reduce the system delay while keeping other indicators unaffected. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 278,496 |
2409.00890 | Towards Investigating Biases in Spoken Conversational Search | Voice-based systems like Amazon Alexa, Google Assistant, and Apple Siri, along with the growing popularity of OpenAI's ChatGPT and Microsoft's Copilot, serve diverse populations, including visually impaired and low-literacy communities. This reflects a shift in user expectations from traditional search to more interactive question-answering models. However, presenting information effectively in voice-only channels remains challenging due to their linear nature. This limitation can impact the presentation of complex queries involving controversial topics with multiple perspectives. Failing to present diverse viewpoints may perpetuate or introduce biases and affect user attitudes. Balancing information load and addressing biases is crucial in designing a fair and effective voice-based system. To address this, we (i) review how biases and user attitude changes have been studied in screen-based web search, (ii) address challenges in studying these changes in voice-based settings like SCS, (iii) outline research questions, and (iv) propose an experimental setup with variables, data, and instruments to explore biases in a voice-based setting like Spoken Conversational Search. | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 485,119 |