id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2007.12102 | Efficient and near-optimal algorithms for sampling small connected subgraphs | We study the following problem: given an integer $k \ge 3$ and a simple graph $G$, sample a connected induced $k$-node subgraph of $G$ uniformly at random. This is a fundamental graph mining primitive with applications in social network analysis, bioinformatics, and more. Surprisingly, no efficient algorithm is known for uniform sampling; the only somewhat efficient algorithms available yield samples that are only approximately uniform, with running times that are unclear or suboptimal. In this work we provide: (i) a near-optimal mixing time bound for a well-known random walk technique, (ii) the first efficient algorithm for truly uniform graphlet sampling, and (iii) the first sublinear-time algorithm for $\epsilon$-uniform graphlet sampling. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 188,732 |
1903.05084 | Decay Replay Mining to Predict Next Process Events | In complex processes, various events can happen in different sequences. The prediction of the next event given an a-priori process state is of importance in such processes. Recent methods have proposed deep learning techniques such as recurrent neural networks, developed on raw event logs, to predict the next event from a process state. However, such deep learning models by themselves lack a clear representation of the process states. At the same time, recent methods have neglected the time feature of event instances. In this paper, we take advantage of Petri nets as a powerful tool in modeling complex process behaviors considering time as an elemental variable. We propose an approach which starts from a Petri net process model constructed by a process mining algorithm. We enhance the Petri net model with time decay functions to create continuous process state samples. Finally, we use these samples in combination with discrete token movement counters and Petri net markings to train a deep learning model that predicts the next event. We demonstrate significant performance improvements and outperform the state-of-the-art methods on nine real-world benchmark event logs. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 124,097 |
1410.5020 | Sparse Beamforming and User-Centric Clustering for Downlink Cloud Radio Access Network | This paper considers a downlink cloud radio access network (C-RAN) in which all the base-stations (BSs) are connected to a central computing cloud via digital backhaul links with finite capacities. Each user is associated with a user-centric cluster of BSs; the central processor shares the user's data with the BSs in the cluster, which then cooperatively serve the user through joint beamforming. Under this setup, this paper investigates the user scheduling, BS clustering and beamforming design problem from a network utility maximization perspective. Differing from previous works, this paper explicitly considers the per-BS backhaul capacity constraints. We formulate the network utility maximization problem for the downlink C-RAN under two different models depending on whether the BS clustering for each user is dynamic or static over different user scheduling time slots. In the former case, the user-centric BS cluster is dynamically optimized for each scheduled user along with the beamforming vector in each time-frequency slot, while in the latter case the user-centric BS cluster is fixed for each user and we jointly optimize the user scheduling and the beamforming vector to account for the backhaul constraints. In both cases, the nonconvex per-BS backhaul constraints are approximated using the reweighted l1-norm technique. This approximation allows us to reformulate the per-BS backhaul constraints into weighted per-BS power constraints and solve the weighted sum rate maximization problem through a generalized weighted minimum mean square error approach. This paper shows that the proposed dynamic clustering algorithm can achieve significant performance gain over existing naive clustering schemes. This paper also proposes two heuristic static clustering schemes that can already achieve a substantial portion of the gain. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 36,858 |
1810.00162 | NICE: Noise Injection and Clamping Estimation for Neural Network Quantization | Convolutional Neural Networks (CNN) are very popular in many fields including computer vision, speech recognition, natural language processing, to name a few. Though deep learning leads to groundbreaking performance in these domains, the networks used are very demanding computationally and are far from real-time even on a GPU, which is not power efficient and therefore does not suit low power systems such as mobile devices. To overcome this challenge, some solutions have been proposed for quantizing the weights and activations of these networks, which accelerate the runtime significantly. Yet, this acceleration comes at the cost of a larger error. The \uniqname method proposed in this work trains quantized neural networks by noise injection and a learned clamping, which improve the accuracy. This leads to state-of-the-art results on various regression and classification tasks, e.g., ImageNet classification with architectures such as ResNet-18/34/50 with as low as 3-bit weights and activations. We implement the proposed solution on an FPGA to demonstrate its applicability for low power real-time applications. The implementation of the paper is available at https://github.com/Lancer555/NICE | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 109,113 |
1006.4114 | How to build a DNA search engine like Google? | This paper proposes a new method to build a large-scale DNA sequence search system based on web search engine technology. We first give a very brief introduction to the methods used in search engines. Then we illustrate in detail how to build a DNA search system like Google. Since there is no local alignment process, this system is able to provide millisecond-level search services for billions of DNA sequences on a typical server. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | 6,848 |
1904.13213 | Topic Classification Method for Analyzing Effect of eWOM on Consumer Game Sales | Electronic word-of-mouth (eWOM) has become an important resource for the analysis of marketing research. In this study, in order to analyze user needs for consumer game software, we focus on tweet data. We propose a topic extraction method using entropy-based feature selection and feature expansion. We also apply it to the classification of data extracted from tweets using SVM. As a result, we achieve a 0.63 F-measure. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 129,324 |
2408.08855 | DPA: Dual Prototypes Alignment for Unsupervised Adaptation of Vision-Language Models | Vision-language models (VLMs), e.g., CLIP, have shown remarkable potential in zero-shot image classification. However, adapting these models to new domains remains challenging, especially in unsupervised settings where labeled data is unavailable. Recent research has proposed pseudo-labeling approaches to adapt CLIP in an unsupervised manner using unlabeled target data. Nonetheless, these methods struggle due to noisy pseudo-labels resulting from the misalignment between CLIP's visual and textual representations. This study introduces DPA, an unsupervised domain adaptation method for VLMs. DPA introduces the concept of dual prototypes, acting as distinct classifiers, along with the convex combination of their outputs, thereby leading to accurate pseudo-label construction. Next, it ranks pseudo-labels to facilitate robust self-training, particularly during early training. Finally, it addresses visual-textual misalignment by aligning textual prototypes with image prototypes to further improve the adaptation performance. Experiments on 13 downstream vision tasks demonstrate that DPA significantly outperforms zero-shot CLIP and the state-of-the-art unsupervised adaptation baselines. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 481,180 |
1903.09731 | Expert-Augmented Machine Learning | Machine Learning is proving invaluable across disciplines. However, its success is often limited by the quality and quantity of available data, while its adoption by the level of trust that models afford users. Human vs. machine performance is commonly compared empirically to decide whether a certain task should be performed by a computer or an expert. In reality, the optimal learning strategy may involve combining the complementary strengths of man and machine. Here we present Expert-Augmented Machine Learning (EAML), an automated method that guides the extraction of expert knowledge and its integration into machine-learned models. We use a large dataset of intensive care patient data to predict mortality and show that we can extract expert knowledge using an online platform, help reveal hidden confounders, improve generalizability on a different population and learn using less data. EAML presents a novel framework for high performance and dependable machine learning in critical applications. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 125,110 |
1502.02803 | A TDOA technique with Super-Resolution based on the Volume Cross-Correlation Function | Time Difference of Arrival (TDOA) is widely used in wireless localization systems. Among the many TDOA approaches, high resolution TDOA algorithms have drawn much attention for their ability to resolve closely spaced signal delays in multipath environments. However, state-of-the-art high resolution TDOA algorithms still have performance weaknesses in resolving time delays in a wireless channel with dense multipath effects, as well as difficulties in implementation due to their high computational complexity. In this paper, we propose a novel TDOA algorithm with super resolution based on a multi-dimensional cross-correlation function: the Volume Cross-Correlation Function (VCC). The proposed TDOA algorithm has excellent time resolution capability in multipath environments, and it also has a much lower computational complexity. Because our algorithm does not require a priori knowledge about the waveform or power spectrum of transmitted signals, it has great potential for use in various passive wireless localization systems. Numerical simulations are also provided to demonstrate the validity of our conclusion. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 40,087 |
1706.04313 | Teaching Compositionality to CNNs | Convolutional neural networks (CNNs) have shown great success in computer vision, approaching human-level performance when trained for specific tasks via application-specific loss functions. In this paper, we propose a method for augmenting and training CNNs so that their learned features are compositional. It encourages networks to form representations that disentangle objects from their surroundings and from each other, thereby promoting better generalization. Our method is agnostic to the specific details of the underlying CNN to which it is applied and can in principle be used with any CNN. As we show in our experiments, the learned representations lead to feature activations that are more localized and improve performance over non-compositional baselines in object recognition tasks. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 75,321 |
1703.03186 | Segmenting Dermoscopic Images | We propose an automatic algorithm, named SDI, for the segmentation of skin lesions in dermoscopic images, articulated into three main steps: selection of the image ROI, selection of the segmentation band, and segmentation. We present extensive experimental results achieved by the SDI algorithm on the lesion segmentation dataset made available for the ISIC 2017 challenge on Skin Lesion Analysis Towards Melanoma Detection, highlighting its advantages and disadvantages. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 69,698 |
2102.12099 | Lossless Compression of Efficient Private Local Randomizers | Locally Differentially Private (LDP) Reports are commonly used for collection of statistics and machine learning in the federated setting. In many cases the best known LDP algorithms require sending prohibitively large messages from the client device to the server (such as when constructing histograms over large domain or learning a high-dimensional model). This has led to significant efforts on reducing the communication cost of LDP algorithms. At the same time LDP reports are known to have relatively little information about the user's data due to randomization. Several schemes are known that exploit this fact to design low-communication versions of LDP algorithm but all of them do so at the expense of a significant loss in utility. Here we demonstrate a general approach that, under standard cryptographic assumptions, compresses every efficient LDP algorithm with negligible loss in privacy and utility guarantees. The practical implication of our result is that in typical applications the message can be compressed to the size of the server's pseudo-random generator seed. More generally, we relate the properties of an LDP randomizer to the power of a pseudo-random generator that suffices for compressing the LDP randomizer. From this general approach we derive low-communication algorithms for the problems of frequency estimation and high-dimensional mean estimation. Our algorithms are simpler and more accurate than existing low-communication LDP algorithms for these well-studied problems. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 221,617 |
1907.05283 | A Comparison of Super-Resolution and Nearest Neighbors Interpolation Applied to Object Detection on Satellite Data | As Super-Resolution (SR) has matured as a research topic, it has been applied to additional topics beyond image reconstruction. In particular, combining classification or object detection tasks with a super-resolution preprocessing stage has yielded improvements in accuracy especially with objects that are small relative to the scene. While SR has shown promise, a study comparing SR and naive upscaling methods such as Nearest Neighbors (NN) interpolation when applied as a preprocessing step for object detection has not been performed. We apply the topic to satellite data and compare the Multi-scale Deep Super-Resolution (MDSR) system to NN on the xView challenge dataset. To do so, we propose a pipeline for processing satellite data that combines multi-stage image tiling and upscaling, the YOLOv2 object detection architecture, and label stitching. We compare the effects of training models using an upscaling factor of 4, upscaling images from 30cm Ground Sample Distance (GSD) to an effective GSD of 7.5cm. Upscaling by this factor significantly improves detection results, increasing Average Precision (AP) of a generalized vehicle class by 23 percent. We demonstrate that while SR produces upscaled images that are more visually pleasing than their NN counterparts, object detection networks see little difference in accuracy with images upsampled using NN obtaining nearly identical results to the MDSRx4 enhanced images with a difference of 0.0002 AP between the two methods. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 138,322 |
1108.5475 | The Dimension of Subcode-Subfields of Shortened Generalized Reed Solomon Codes | Reed-Solomon (RS) codes are among the most ubiquitous codes due to their good parameters as well as efficient encoding and decoding procedures. However, RS codes suffer from having a fixed length. In many applications where the length is static, the appropriate length can be obtained by an RS code by shortening or puncturing. Generalized Reed-Solomon (GRS) codes are a generalization of RS codes, whose subfield-subcodes are extensively studied. In this paper we show that a particular class of GRS codes produces many subfield-subcodes with large dimension. An algorithm for searching through the codes is presented as well as a list of new codes obtained from this method. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 11,835 |
2312.02813 | BIVDiff: A Training-Free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models | Diffusion models have made tremendous progress in text-driven image and video generation. Now text-to-image foundation models are widely applied to various downstream image synthesis tasks, such as controllable image generation and image editing, while downstream video synthesis tasks are less explored for several reasons. First, it requires huge memory and computation overhead to train a video generation foundation model. Even with video foundation models, additional costly training is still required for downstream video synthesis tasks. Second, although some works extend image diffusion models into videos in a training-free manner, temporal consistency cannot be well preserved. Finally, these adaptation methods are specifically designed for one task and fail to generalize to different tasks. To mitigate these issues, we propose a training-free general-purpose video synthesis framework, coined as {\bf BIVDiff}, via bridging specific image diffusion models and general text-to-video foundation diffusion models. Specifically, we first use a specific image diffusion model (e.g., ControlNet and Instruct Pix2Pix) for frame-wise video generation, then perform Mixed Inversion on the generated video, and finally input the inverted latents into the video diffusion models (e.g., VidRD and ZeroScope) for temporal smoothing. This decoupled framework enables flexible image model selection for different purposes with strong task generalization and high efficiency. To validate the effectiveness and general use of BIVDiff, we perform a wide range of video synthesis tasks, including controllable video generation, video editing, video inpainting, and outpainting. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 413,014 |
2101.11789 | Augmenting Proposals by the Detector Itself | The lack of enough high-quality proposals for the RoI box head has long impeded two-stage and multi-stage object detectors, and many previous works try to solve it via improving the RPN's performance or manually generating proposals from ground truth. However, these methods either need huge training and inference costs or bring little improvement. In this paper, we design a novel training method named APDI, which means augmenting proposals by the detector itself and can generate proposals with higher quality. Furthermore, APDI makes it possible to integrate an IoU head into the RoI box head. And it does not add any hyperparameter, which is beneficial for future research and downstream tasks. Extensive experiments on the COCO dataset show that our method brings at least 2.7 AP improvements on Faster R-CNN with various backbones, and APDI can cooperate with advanced RPNs, such as GA-RPN and Cascade RPN, to obtain extra gains. Furthermore, it brings significant improvements on Cascade R-CNN. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 217,390 |
2203.13686 | Image Compression and Actionable Intelligence With Deep Neural Networks | If a unit cannot receive intelligence from a source due to external factors, we consider them disadvantaged users. We categorize this as a preoccupied unit working on a low connectivity device on the edge. This case requires that we use a different approach to deliver intelligence, particularly satellite imagery information, than normally employed. To address this, we propose a survey of information reduction techniques to deliver the information from a satellite image in a smaller package. We investigate four techniques to aid in the reduction of delivered information: traditional image compression, neural network image compression, object detection image cutout, and image to caption. Each of these mechanisms have their benefits and tradeoffs when considered for a disadvantaged user. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 287,725 |
1907.01805 | Sensitivity of Legged Balance Control to Uncertainties and Sampling Period | We propose to quantify the effect of sensor and actuator uncertainties on the control of the center of mass and center of pressure in legged robots, since this is central for maintaining their balance with a limited support polygon. Our approach is based on robust control theory, considering uncertainties that can take any value between specified bounds. This provides a principled approach to deciding optimal feedback gains. Surprisingly, our main observation is that the sampling period can be as long as 200 ms with literally no impact on maximum tracking error and, as a result, on the guarantee that balance can be maintained safely. Our findings are validated in simulations and experiments with the torque-controlled humanoid robot Toro developed at DLR. The proposed mathematical derivations and results apply nevertheless equally to biped and quadruped robots. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 137,442 |
1912.10185 | Jacobian Adversarially Regularized Networks for Robustness | Adversarial examples are crafted with imperceptible perturbations with the intent to fool neural networks. Against such attacks, adversarial training and its variants stand as the strongest defense to date. Previous studies have pointed out that robust models that have undergone adversarial training tend to produce more salient and interpretable Jacobian matrices than their non-robust counterparts. A natural question is whether a model trained with an objective to produce salient Jacobian can result in better robustness. This paper answers this question with affirmative empirical results. We propose Jacobian Adversarially Regularized Networks (JARN) as a method to optimize the saliency of a classifier's Jacobian by adversarially regularizing the model's Jacobian to resemble natural training images. Image classifiers trained with JARN show improved robust accuracy compared to standard models on the MNIST, SVHN and CIFAR-10 datasets, uncovering a new angle to boost robustness without using adversarial training examples. | false | false | false | false | false | false | false | false | false | false | false | true | true | false | false | true | false | false | 158,257 |
2112.14602 | Modified DDPG car-following model with a real-world human driving experience with CARLA simulator | In the autonomous driving field, fusion of human knowledge into Deep Reinforcement Learning (DRL) is often based on the human demonstration recorded in a simulated environment. This limits the generalization and the feasibility of application in real-world traffic. We propose a two-stage DRL method to train a car-following agent, that modifies the policy by leveraging the real-world human driving experience and achieves performance superior to the pure DRL agent. Training of the DRL agent is done within the CARLA framework with the Robot Operating System (ROS). For evaluation, we designed different driving scenarios to compare the proposed two-stage DRL car-following agent with other agents. After extracting the "good" behavior from the human driver, the agent becomes more efficient and reasonable, which makes this autonomous agent more suitable for Human-Robot Interaction (HRI) traffic. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 273,569 |
2410.10391 | Efficiently Obtaining Reachset Conformance for the Formal Analysis of Robotic Contact Tasks | Formal verification of robotic tasks requires a simple yet conformant model of the used robot. We present the first work on generating reachset conformant models for robotic contact tasks considering hybrid (mixed continuous and discrete) dynamics. Reachset conformance requires that the set of reachable outputs of the abstract model encloses all previous measurements to transfer safety properties. Aiming for industrial applications, we describe the system using a simple hybrid automaton with linear dynamics. We inject non-determinism into the continuous dynamics and the discrete transitions, and we optimally identify all model parameters together with the non-determinism required to capture the recorded behaviors. Using two 3-DOF robots, we show that our approach can effectively generate models to capture uncertainties in system behavior and substantially reduce the required testing effort in industrial applications. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 498,052 |
2002.00530 | Toward Autonomous Robotic Micro-Suturing using Optical Coherence Tomography Calibration and Path Planning | Robotic automation has the potential to assist human surgeons in performing suturing tasks in microsurgery, and in order to do so a robot must be able to guide a needle with sub-millimeter precision through soft tissue. This paper presents a robotic suturing system that uses a 3D optical coherence tomography (OCT) system for imaging feedback. Calibration of the robot-OCT and robot-needle transforms, wound detection, keypoint identification, and path planning are all performed automatically. The calibration method handles pose uncertainty when the needle is grasped using a variant of iterative closest points. The path planner uses the identified wound shape to calculate needle entry and exit points to yield an evenly-matched wound shape after closure. Experiments on tissue phantoms and animal tissue demonstrate that the system can pass a suture needle through wounds with 0.27 mm overall accuracy in achieving the planned entry and exit points. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 162,379 |
2502.07250 | NARCE: A Mamba-Based Neural Algorithmic Reasoner Framework for Online Complex Event Detection | Current machine learning models excel in short-span perception tasks but struggle to derive high-level insights from long-term observation, a capability central to understanding complex events (CEs). CEs, defined as sequences of short-term atomic events (AEs) governed by spatiotemporal rules, are challenging to detect online due to the need to extract meaningful patterns from long and noisy sensor data while ignoring irrelevant events. We hypothesize that state-based methods are well-suited for CE detection, as they capture event progression through state transitions without requiring long-term memory. Baseline experiments validate this, demonstrating that the state-space model Mamba outperforms existing architectures. However, Mamba's reliance on extensive labeled data, which are difficult to obtain, motivates our second hypothesis: decoupling CE rule learning from noisy sensor data can reduce data requirements. To address this, we propose NARCE, a framework that combines Neural Algorithmic Reasoning (NAR) to split the task into two components: (i) learning CE rules independently of sensor data using synthetic concept traces generated by LLMs and (ii) mapping sensor inputs to these rules via an adapter. Our results show that NARCE outperforms baselines in accuracy, generalization to unseen and longer sensor data, and data efficiency, significantly reducing annotation costs while advancing robust CE detection. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 532,507 |
1703.09157 | Reweighted Infrared Patch-Tensor Model With Both Non-Local and Local Priors for Single-Frame Small Target Detection | Many state-of-the-art methods have been proposed for infrared small target detection. They work well on the images with homogeneous backgrounds and high-contrast targets. However, when facing highly heterogeneous backgrounds, they would not perform very well, mainly due to: 1) the existence of strong edges and other interfering components, 2) not utilizing the priors fully. Inspired by this, we propose a novel method to exploit both local and non-local priors simultaneously. Firstly, we employ a new infrared patch-tensor (IPT) model to represent the image and preserve its spatial correlations. Exploiting the target sparse prior and background non-local self-correlation prior, the target-background separation is modeled as a robust low-rank tensor recovery problem. Moreover, with the help of the structure tensor and reweighted idea, we design an entry-wise local-structure-adaptive and sparsity enhancing weight to replace the globally constant weighting parameter. The decomposition could be achieved via the element-wise reweighted higher-order robust principal component analysis with an additional convergence condition according to the practical situation of target detection. Extensive experiments demonstrate that our model outperforms other state-of-the-art methods, in particular for the images with very dim targets and heavy clutter. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 70,708 |
2011.05774 | Influencing dynamics on social networks without knowledge of network microstructure | Social network based information campaigns can be used for promoting beneficial health behaviours and mitigating polarisation (e.g. regarding climate change or vaccines). Network-based intervention strategies typically rely on full knowledge of network structure. It is largely not possible or desirable to obtain population-level social network data due to availability and privacy issues. It is easier to obtain information about individuals' attributes (e.g. age, income), which are jointly informative of an individual's opinions and their social network position. We investigate strategies for influencing the system state in a statistical mechanics based model of opinion formation. Using synthetic and data based examples we illustrate the advantages of implementing coarse-grained influence strategies on Ising models with modular structure in the presence of external fields. Our work provides a scalable methodology for influencing Ising systems on large graphs and the first exploration of the Ising influence problem in the presence of ambient (social) fields. By exploiting the observation that strong ambient fields can simplify control of networked dynamics, our findings open the possibility of efficiently computing and implementing public information campaigns using insights from social network theory without costly or invasive levels of data collection. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 206,037 |
2201.06814 | Leaving No One Behind: A Multi-Scenario Multi-Task Meta Learning
Approach for Advertiser Modeling | Advertisers play an essential role in many e-commerce platforms like Taobao and Amazon. Fulfilling their marketing needs and supporting their business growth is critical to the long-term prosperity of platform economies. However, compared with extensive studies on user modeling such as click-through rate predictions, much less attention has been drawn to advertisers, especially in terms of understanding their diverse demands and performance. Different from user modeling, advertiser modeling generally involves many kinds of tasks (e.g. predictions of advertisers' expenditure, active-rate, or total impressions of promoted products). In addition, major e-commerce platforms often provide multiple marketing scenarios (e.g. Sponsored Search, Display Ads, Live Streaming Ads) while advertisers' behavior tend to be dispersed among many of them. This raises the necessity of multi-task and multi-scenario consideration in comprehensive advertiser modeling, which faces the following challenges: First, one model per scenario or per task simply doesn't scale; Second, it is particularly hard to model new or minor scenarios with limited data samples; Third, inter-scenario correlations are complicated, and may vary given different tasks. To tackle these challenges, we propose a multi-scenario multi-task meta learning approach (M2M) which simultaneously predicts multiple tasks in multiple advertising scenarios. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 275,841 |
2405.00099 | Creative Beam Search: LLM-as-a-Judge For Improving Response Generation | Large language models are revolutionizing several areas, including artificial creativity. However, the process of generation in machines profoundly diverges from that observed in humans. In particular, machine generation is characterized by a lack of intentionality and an underlying creative process. We propose a method called Creative Beam Search that uses Diverse Beam Search and LLM-as-a-Judge to perform response generation and response validation. The results of a qualitative experiment show how our approach can provide better output than standard sampling techniques. We also show that the response validation step is a necessary complement to the response generation step. | true | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 450,791 |
1807.08917 | Panchromatic Sharpening of Remote Sensing Images Using a Multi-scale
Approach | An ideal fusion method preserves the spectral information of the fused image and adds spatial information to it with no spectral distortion. Recently, a wavelet Kalman filter method was proposed that uses the ARSIS concept to fuse MS and PAN images. This method is applied in a multiscale version, i.e., the variable index is scale instead of time. With the aim of fusion, we present a more detailed study of this model and discuss the rationality of its assumptions, such as the first-order Markov model and the Gaussian distribution of the posterior density. Finally, we propose a method using a wavelet Kalman particle filter to improve the spectral and spatial quality of the fused image. We show that our model is more consistent with natural MS and PAN images. Visual and statistical analyses show that the proposed algorithm clearly improves the fusion quality in terms of correlation coefficient, ERGAS, UIQI, and Q4, compared to other methods including IHS, HMP, PCA, à trous, udWI, udWPC, Adaptive IHS, Improved Adaptive PCA, and the wavelet Kalman filter. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 103,626
0912.4884 | An Invariance Principle for Polytopes | Let X be randomly chosen from {-1,1}^n, and let Y be randomly chosen from the standard spherical Gaussian on R^n. For any (possibly unbounded) polytope P formed by the intersection of k halfspaces, we prove that |Pr [X belongs to P] - Pr [Y belongs to P]| < log^{8/5}k * Delta, where Delta is a parameter that is small for polytopes formed by the intersection of "regular" halfspaces (i.e., halfspaces with low influence). The novelty of our invariance principle is the polylogarithmic dependence on k. Previously, only bounds that were at least linear in k were known. We give two important applications of our main result: (1) A polylogarithmic in k bound on the Boolean noise sensitivity of intersections of k "regular" halfspaces (previous work gave bounds linear in k). (2) A pseudorandom generator (PRG) with seed length O((log n)*poly(log k,1/delta)) that delta-fools all polytopes with k faces with respect to the Gaussian distribution. We also obtain PRGs with similar parameters that fool polytopes formed by intersection of regular halfspaces over the hypercube. Using our PRG constructions, we obtain the first deterministic quasi-polynomial time algorithms for approximately counting the number of solutions to a broad class of integer programs, including dense covering problems and contingency tables. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 5,216 |
2312.10462 | Fusion of Deep and Shallow Features for Face Kinship Verification | Kinship verification from face images is a novel and formidable challenge in the realms of pattern recognition and computer vision. This work makes notable contributions by incorporating a preprocessing technique known as Multiscale Retinex (MSR), which enhances image quality. Our approach harnesses the strengths of complementary deep (VGG16) and shallow texture (BSIF) descriptors by combining them at the score level using the Logistic Regression (LR) technique. We assess the effectiveness of our approach by conducting comprehensive experiments on three challenging kinship datasets: Cornell Kin Face, UB Kin Face and TS Kin Face. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 416,179
2303.10512 | AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning | Fine-tuning large pre-trained language models on downstream tasks has become an important paradigm in NLP. However, common practice fine-tunes all of the parameters in a pre-trained model, which becomes prohibitive when a large number of downstream tasks are present. Therefore, many fine-tuning methods are proposed to learn incremental updates of pre-trained weights in a parameter efficient way, e.g., low-rank increments. These methods often evenly distribute the budget of incremental updates across all pre-trained weight matrices, and overlook the varying importance of different weight parameters. As a consequence, the fine-tuning performance is suboptimal. To bridge this gap, we propose AdaLoRA, which adaptively allocates the parameter budget among weight matrices according to their importance score. In particular, AdaLoRA parameterizes the incremental updates in the form of singular value decomposition. Such a novel approach allows us to effectively prune the singular values of unimportant updates, which is essentially to reduce their parameter budget but circumvent intensive exact SVD computations. We conduct extensive experiments with several pre-trained models on natural language processing, question answering, and natural language generation to validate the effectiveness of AdaLoRA. Results demonstrate that AdaLoRA manifests notable improvement over baselines, especially in the low budget settings. Our code is publicly available at https://github.com/QingruZhang/AdaLoRA . | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 352,480 |
2310.02486 | OCU-Net: A Novel U-Net Architecture for Enhanced Oral Cancer
Segmentation | Accurate detection of oral cancer is crucial for improving patient outcomes. However, the field faces two key challenges: the scarcity of deep learning-based image segmentation research specifically targeting oral cancer and the lack of annotated data. Our study proposes OCU-Net, a pioneering U-Net image segmentation architecture exclusively designed to detect oral cancer in hematoxylin and eosin (H&E) stained image datasets. OCU-Net incorporates advanced deep learning modules, such as the Channel and Spatial Attention Fusion (CSAF) module, a novel and innovative feature that emphasizes important channel and spatial areas in H&E images while exploring contextual information. In addition, OCU-Net integrates other innovative components such as Squeeze-and-Excite (SE) attention module, Atrous Spatial Pyramid Pooling (ASPP) module, residual blocks, and multi-scale fusion. The incorporation of these modules showed superior performance for oral cancer segmentation for two datasets used in this research. Furthermore, we effectively utilized the efficient ImageNet pre-trained MobileNet-V2 model as a backbone of our OCU-Net to create OCU-Netm, an enhanced version achieving state-of-the-art results. Comprehensive evaluation demonstrates that OCU-Net and OCU-Netm outperformed existing segmentation methods, highlighting their precision in identifying cancer cells in H&E images from OCDC and ORCA datasets. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 396,866 |
2207.06987 | Precision Attitude Stabilization with Intermittent External Torque | The attitude stabilization of a micro-satellite employing a variable-amplitude cold gas thruster which reflects as a time varying gain on the control input is considered. Existing literature uses a persistence filter based approach that typically leads to large control gains and torque inputs during specific time intervals corresponding to the 'on' phase of the external actuation. This work aims at reducing the transient spikes placed upon the torque commands by the judicious introduction of an additional time varying scaling signal as part of the control law. The time update mechanism for the new scaling factor and overall closed-loop stability are established through a Lyapunov-like analysis. Numerical simulations highlight the various features of this new control algorithm for spacecraft attitude stabilization subject to torque intermittence. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 308,069 |
2406.11049 | Reconsidering Sentence-Level Sign Language Translation | Historically, sign language machine translation has been posed as a sentence-level task: datasets consisting of continuous narratives are chopped up and presented to the model as isolated clips. In this work, we explore the limitations of this task framing. First, we survey a number of linguistic phenomena in sign languages that depend on discourse-level context. Then as a case study, we perform the first human baseline for sign language translation that actually substitutes a human into the machine learning task framing, rather than provide the human with the entire document as context. This human baseline -- for ASL to English translation on the How2Sign dataset -- shows that for 33% of sentences in our sample, our fluent Deaf signer annotators were only able to understand key parts of the clip in light of additional discourse-level context. These results underscore the importance of understanding and sanity checking examples when adapting machine learning to new domains. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 464,691 |
2305.02215 | Exploring Linguistic Properties of Monolingual BERTs with Typological
Classification among Languages | The impressive achievements of transformers force NLP researchers to delve into how these models represent the underlying structure of natural language. In this paper, we propose a novel standpoint to investigate the above issue: using typological similarities among languages to observe how their respective monolingual models encode structural information. We aim to layer-wise compare transformers for typologically similar languages to observe whether these similarities emerge for particular layers. For this investigation, we propose to use Centered Kernel Alignment to measure similarity among weight matrices. We found that syntactic typological similarity is consistent with the similarity between the weights in the middle layers, which are the pretrained BERT layers to which syntax encoding is generally attributed. Moreover, we observe that a domain adaptation on semantically equivalent texts enhances this similarity among weight matrices. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 361,955 |
2411.12787 | Visual Cue Enhancement and Dual Low-Rank Adaptation for Efficient Visual
Instruction Fine-Tuning | Parameter-efficient fine-tuning multimodal large language models (MLLMs) presents significant challenges, including reliance on high-level visual features that limit fine-grained detail comprehension, and data conflicts that arise from task complexity. To address these issues, we propose an efficient fine-tuning framework with two novel approaches: Vision Cue Enhancement (VCE) and Dual Low-Rank Adaptation (Dual-LoRA). VCE enhances the vision projector by integrating multi-level visual cues, improving the model's ability to capture fine-grained visual features. Dual-LoRA introduces a dual low-rank structure for instruction tuning, decoupling learning into skill and task spaces to enable precise control and efficient adaptation across diverse tasks. Our method simplifies implementation, enhances visual comprehension, and improves adaptability. Experiments on both downstream tasks and general benchmarks demonstrate the effectiveness of our proposed approach. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 509,546 |
2204.11340 | Farmer's Assistant: A Machine Learning Based Application for
Agricultural Solutions | Farmers face several challenges when growing crops, such as uncertain irrigation and poor soil quality. Especially in India, a major fraction of farmers do not have the knowledge to select appropriate crops and fertilizers. Moreover, crop failure due to disease causes a significant loss to the farmers, as well as the consumers. While there have been recent developments in the automated detection of these diseases using Machine Learning techniques, the utilization of Deep Learning has not been fully explored. Additionally, such models are not easy to use because of the high-quality data used in their training, lack of computational power, and poor generalizability of the models. To this end, we create an open-source, easy-to-use web application to address some of these issues, which may help improve crop production. In particular, we support crop recommendation, fertilizer recommendation, plant disease prediction, and an interactive news-feed. In addition, we also use interpretability techniques in an attempt to explain the predictions made by our disease detection model. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 293,110
1410.2609 | On the Number of RF Chains and Phase Shifters, and Scheduling Design
with Hybrid Analog-Digital Beamforming | This paper considers hybrid beamforming (HB) for downlink multiuser massive multiple input multiple output (MIMO) systems with frequency selective channels. For this system, first we determine the required number of radio frequency (RF) chains and phase shifters (PSs) such that the proposed HB achieves the same performance as that of the digital beamforming (DB) which utilizes $N$ (number of transmitter antennas) RF chains. We show that the performance of the DB can be achieved with our HB just by utilizing $r_t$ RF chains and $2r_t(N-r_t + 1)$ PSs, where $r_t \leq N$ is the rank of the combined digital precoder matrices of all sub-carriers. Second, we provide a simple and novel approach to reduce the number of PSs with only a negligible performance degradation. Numerical results reveal that only $20-40$ PSs per RF chain are sufficient for practically relevant parameter settings. Finally, for the scenario where the deployed number of RF chains $(N_a)$ is less than $r_t$, we propose a simple user scheduling algorithm to select the best set of users in each sub-carrier. Simulation results validate theoretical expressions, and demonstrate the superiority of the proposed HB design over the existing HB designs in both flat fading and frequency selective channels. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 36,630 |
2407.03333 | C-ShipGen: A Conditional Guided Diffusion Model for Parametric Ship Hull
Design | Ship design is a complex design process that may take a team of naval architects many years to complete. Improving the ship design process can lead to significant cost savings, while still delivering high-quality designs to customers. A new technology for ship hull design is diffusion models, a type of generative artificial intelligence. Prior work with diffusion models for ship hull design created high-quality ship hulls with reduced drag and larger displaced volumes. However, the work could not generate hulls that meet specific design constraints. This paper proposes a conditional diffusion model that generates hull designs given specific constraints, such as the desired principal dimensions of the hull. In addition, this diffusion model leverages the gradients from a total resistance regression model to create low-resistance designs. Five design test cases compared the diffusion model to a design optimization algorithm to create hull designs with low resistance. In all five test cases, the diffusion model was shown to create diverse designs with a total resistance less than the optimized hull, having resistance reductions over 25%. The diffusion model also generated these designs without retraining. This work can significantly reduce the design cycle time of ships by creating high-quality hulls that meet user requirements with a data-driven approach. | false | true | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 470,122 |
2306.01103 | Joint Learning of Label and Environment Causal Independence for Graph
Out-of-Distribution Generalization | We tackle the problem of graph out-of-distribution (OOD) generalization. Existing graph OOD algorithms either rely on restricted assumptions or fail to exploit environment information in training data. In this work, we propose to simultaneously incorporate label and environment causal independence (LECI) to fully make use of label and environment information, thereby addressing the challenges faced by prior methods on identifying causal and invariant subgraphs. We further develop an adversarial training strategy to jointly optimize these two properties for causal subgraph discovery with theoretical guarantees. Extensive experiments and analysis show that LECI significantly outperforms prior methods on both synthetic and real-world datasets, establishing LECI as a practical and effective solution for graph OOD generalization. Our code is available at https://github.com/divelab/LECI. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 370,293 |
2106.15292 | Adaptive Sample Selection for Robust Learning under Label Noise | Deep Neural Networks (DNNs) have been shown to be susceptible to memorization or overfitting in the presence of noisily-labelled data. For the problem of robust learning under such noisy data, several algorithms have been proposed. A prominent class of algorithms rely on sample selection strategies wherein, essentially, a fraction of samples with loss values below a certain threshold are selected for training. These algorithms are sensitive to such thresholds, and it is difficult to fix or learn these thresholds. Often, these algorithms also require information such as label noise rates which are typically unavailable in practice. In this paper, we propose an adaptive sample selection strategy that relies only on batch statistics of a given mini-batch to provide robustness against label noise. The algorithm does not have any additional hyperparameters for sample selection, does not need any information on noise rates and does not need access to separate data with clean labels. We empirically demonstrate the effectiveness of our algorithm on benchmark datasets. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 243,689 |
2012.01950 | Co-mining: Self-Supervised Learning for Sparsely Annotated Object
Detection | Object detectors usually achieve promising results with the supervision of complete instance annotations. However, their performance is far from satisfactory with sparse instance annotations. Most existing methods for sparsely annotated object detection either re-weight the loss of hard negative samples or convert the unlabeled instances into ignored regions to reduce the interference of false negatives. We argue that these strategies are insufficient since they can at most alleviate the negative effect caused by missing annotations. In this paper, we propose a simple but effective mechanism, called Co-mining, for sparsely annotated object detection. In our Co-mining, two branches of a Siamese network predict the pseudo-label sets for each other. To enhance multi-view learning and better mine unlabeled instances, the original image and corresponding augmented image are used as the inputs of two branches of the Siamese network, respectively. Co-mining can serve as a general training mechanism applied to most of modern object detectors. Experiments are performed on MS COCO dataset with three different sparsely annotated settings using two typical frameworks: anchor-based detector RetinaNet and anchor-free detector FCOS. Experimental results show that our Co-mining with RetinaNet achieves 1.4%~2.1% improvements compared with different baselines and surpasses existing methods under the same sparsely annotated setting. Code is available at https://github.com/megvii-research/Co-mining. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 209,592 |
2405.15134 | Efficient Biomedical Entity Linking: Clinical Text Standardization with
Low-Resource Techniques | Clinical text is rich in information, with mentions of treatment, medication and anatomy among many other clinical terms. Multiple terms can refer to the same core concept, which can be referred to as a clinical entity. Ontologies like the Unified Medical Language System (UMLS) are developed and maintained to store millions of clinical entities, including their definitions, relations and other corresponding information. These ontologies are used for standardization of clinical text by normalizing varying surface forms of a clinical term through Biomedical entity linking. With the introduction of transformer-based language models, there has been significant progress in Biomedical entity linking. In this work, we focus on learning through synonym pairs associated with the entities. Compared to existing approaches, our approach significantly reduces training data and resource consumption. Moreover, we propose a suite of context-based and context-less reranking techniques for performing entity disambiguation. Overall, we achieve performance similar to the state-of-the-art zero-shot and distantly supervised entity linking techniques on the Medmentions dataset, the largest annotated dataset on UMLS, without any domain-based training. Finally, we show that retrieval performance alone might not be sufficient as an evaluation metric and introduce an article-level quantitative and qualitative analysis to reveal further insights on the performance of entity linking methods. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 456,763
2111.01865 | Off-Policy Correction for Deep Deterministic Policy Gradient Algorithms
via Batch Prioritized Experience Replay | The experience replay mechanism allows agents to use their experiences multiple times. In prior works, the sampling probability of the transitions was adjusted according to their importance. Reassigning sampling probabilities for every transition in the replay buffer after each iteration is highly inefficient. Therefore, experience replay prioritization algorithms recalculate the significance of a transition when the corresponding transition is sampled, to gain computational efficiency. However, the importance level of the transitions changes dynamically as the policy and the value function of the agent are updated. In addition, experience replay stores transitions generated by previous policies of the agent, which may significantly deviate from the agent's most recent policy. Higher deviation from the most recent policy leads to more off-policy updates, which is detrimental to the agent. In this paper, we develop a novel algorithm, Batch Prioritizing Experience Replay via KL Divergence (KLPER), which prioritizes batches of transitions rather than directly prioritizing each transition. Moreover, to reduce the off-policyness of the updates, our algorithm selects one batch among a certain number of batches and forces the agent to learn through the batch that is most likely generated by its most recent policy. We combine our algorithm with Deep Deterministic Policy Gradient and Twin Delayed Deep Deterministic Policy Gradient and evaluate it on various continuous control tasks. KLPER provides promising improvements for deep deterministic continuous control algorithms in terms of sample efficiency, final performance, and stability of the policy during training. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 264,677
2112.14023 | The Devil is in the Task: Exploiting Reciprocal Appearance-Localization
Features for Monocular 3D Object Detection | Low-cost monocular 3D object detection plays a fundamental role in autonomous driving, yet its accuracy is still far from satisfactory. In this paper, we dig into the 3D object detection task and reformulate it as the sub-tasks of object localization and appearance perception, which enables a deep excavation of the reciprocal information underlying the entire task. We introduce a Dynamic Feature Reflecting Network, named DFR-Net, which contains two novel standalone modules: (i) the Appearance-Localization Feature Reflecting module (ALFR), which first separates task-specific features and then mutually reflects the reciprocal features; (ii) the Dynamic Intra-Trading module (DIT), which adaptively realigns the training processes of the various sub-tasks in a self-learning manner. Extensive experiments on the challenging KITTI dataset demonstrate the effectiveness and generalization of DFR-Net. We rank 1st among all monocular 3D object detectors on the KITTI test set (as of March 16th, 2021). The proposed method can also be plugged into many cutting-edge 3D detection frameworks at negligible cost to boost performance. The code will be made publicly available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 273,432
1912.08547 | Cognitive Twins for Supporting Decision-Makings of Internet of Things
Systems | Cognitive Twins (CT) are proposed as Digital Twins (DT) with augmented semantic capabilities for identifying the dynamics of virtual model evolution, promoting the understanding of interrelationships between virtual models and enhancing the decision-making based on DT. The CT ensures that assets of Internet of Things (IoT) systems are well-managed and concerns beyond technical stakeholders are addressed during IoT system development. In this paper, a Knowledge Graph (KG) centric framework is proposed to develop CT. Based on the framework, a future tool-chain is proposed to develop the CT for the initiatives of H2020 project FACTLOG. Based on the comparison between DT and CT, we infer the CT is a more comprehensive approach to support IoT-based systems development than DT. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 157,865 |
2108.07413 | Cross-Image Region Mining with Region Prototypical Network for Weakly
Supervised Segmentation | Weakly supervised image segmentation trained with image-level labels usually suffers from inaccurate coverage of object areas during the generation of the pseudo groundtruth. This is because the object activation maps are trained with the classification objective and lack the ability to generalize. To improve the generality of the objective activation maps, we propose a region prototypical network RPNet to explore the cross-image object diversity of the training set. Similar object parts across images are identified via region feature comparison. Object confidence is propagated between regions to discover new object areas while background regions are suppressed. Experiments show that the proposed method generates more complete and accurate pseudo object masks, while achieving state-of-the-art performance on PASCAL VOC 2012 and MS COCO. In addition, we investigate the robustness of the proposed method on reduced training sets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 250,907 |
2304.13943 | Provably Stabilizing Global-Position Tracking Control for Hybrid Models
of Multi-Domain Bipedal Walking via Multiple Lyapunov Analysis | Accurate control of a humanoid robot's global position (i.e., its three-dimensional position in the world) is critical to the reliable execution of high-risk tasks such as avoiding collision with pedestrians in a crowded environment. This paper introduces a time-based nonlinear control method that achieves accurate global-position tracking (GPT) for multi-domain bipedal walking. Deriving a tracking controller for bipedal robots is challenging due to the highly complex robot dynamics that are time-varying and hybrid, especially for multi-domain walking that involves multiple phases/domains of full actuation, over actuation, and underactuation. To tackle this challenge, we introduce a continuous-phase GPT control law for multi-domain walking, which provably ensures the exponential convergence of the entire error state within the full and over actuation domains and that of the directly regulated error state within the underactuation domain. We then construct sufficient multiple-Lyapunov stability conditions for the hybrid multi-domain tracking error system under the proposed GPT control law. We illustrate the proposed controller design through both three-domain walking with all motors activated and two-domain gait with inactive ankle motors. Simulations of a ROBOTIS OP3 bipedal humanoid robot demonstrate the satisfactory accuracy and convergence rate of the proposed control approach under two different cases of multi-domain walking as well as various walking speeds and desired paths. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 360,767 |
2111.11044 | Exploring Segment-level Semantics for Online Phase Recognition from
Surgical Videos | Automatic surgical phase recognition plays a vital role in robot-assisted surgeries. Existing methods ignore a pivotal problem: surgical phases should be classified by learning segment-level semantics instead of solely relying on frame-wise information. This paper presents a segment-attentive hierarchical consistency network (SAHC) for surgical phase recognition from videos. The key idea is to extract hierarchical high-level semantic-consistent segments and use them to refine the erroneous predictions caused by ambiguous frames. To achieve this, we design a temporal hierarchical network to generate hierarchical high-level segments. Then, we introduce a hierarchical segment-frame attention module to capture relations between the low-level frames and high-level segments. By regularizing the predictions of frames and their corresponding segments via a consistency loss, the network can generate semantic-consistent segments and then rectify the misclassified predictions caused by ambiguous low-level frames. We validate SAHC on two public surgical video datasets, i.e., the M2CAI16 challenge dataset and the Cholec80 dataset. Experimental results show that our method outperforms previous state-of-the-art methods, and ablation studies prove the effectiveness of our proposed modules. Our code has been released at: https://github.com/xmed-lab/SAHC. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 267,530
2011.10284 | ScalarFlow: A Large-Scale Volumetric Data Set of Real-world Scalar
Transport Flows for Computer Animation and Machine Learning | In this paper, we present ScalarFlow, a first large-scale data set of reconstructions of real-world smoke plumes. We additionally propose a framework for accurate physics-based reconstructions from a small number of video streams. Central components of our algorithm are a novel estimation of unseen inflow regions and an efficient regularization scheme. Our data set includes a large number of complex and natural buoyancy-driven flows. The flows transition to turbulent flows and contain observable scalar transport processes. As such, the ScalarFlow data set is tailored towards computer graphics, vision, and learning applications. The published data set will contain volumetric reconstructions of velocity and density, input image sequences, together with calibration data, code, and instructions on how to recreate the commodity hardware capture setup. We further demonstrate one of the many potential application areas: a first perceptual evaluation study, which reveals that the complexity of the captured flows requires a huge simulation resolution for regular solvers in order to recreate at least parts of the natural complexity contained in the captured data. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 207,468
2008.07962 | Relational Reflection Entity Alignment | Entity alignment aims to identify equivalent entity pairs from different Knowledge Graphs (KGs), which is essential in integrating multi-source KGs. Recently, with the introduction of GNNs into entity alignment, the architectures of recent models have become more and more complicated. We even find two counter-intuitive phenomena within these methods: (1) The standard linear transformation in GNNs does not work well. (2) Many advanced KG embedding models designed for the link prediction task perform poorly in entity alignment. In this paper, we abstract existing entity alignment methods into a unified framework, Shape-Builder & Alignment, which not only successfully explains the above phenomena but also derives two key criteria for an ideal transformation operation. Furthermore, we propose a novel GNNs-based method, Relational Reflection Entity Alignment (RREA). RREA leverages Relational Reflection Transformation to obtain relation-specific embeddings for each entity in a more efficient way. The experimental results on real-world datasets show that our model significantly outperforms the state-of-the-art methods, exceeding them by 5.8%-10.9% on Hits@1. | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | 192,270
2305.05253 | Attack Named Entity Recognition by Entity Boundary Interference | Named Entity Recognition (NER) is a cornerstone NLP task while its robustness has been given little attention. This paper rethinks the principles of NER attacks derived from sentence classification, as they can easily violate the label consistency between the original and adversarial NER examples. This is due to the fine-grained nature of NER, as even minor word changes in the sentence can result in the emergence or mutation of any entities, yielding invalid adversarial examples. To this end, we propose a novel one-word modification NER attack based on a key insight: NER models are always vulnerable to the boundary position of an entity when making their decision. We thus strategically insert a new boundary into the sentence and trigger Entity Boundary Interference, whereby the victim model makes a wrong prediction either on this boundary word or on other words in the sentence. We call this attack Virtual Boundary Attack (ViBA), which is shown to be remarkably effective when attacking both English and Chinese models with a 70%-90% attack success rate on state-of-the-art language models (e.g. RoBERTa, DeBERTa) and also significantly faster than previous methods. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 363,070
1712.04143 | Benchmarking Single Image Dehazing and Beyond | We present a comprehensive study and evaluation of existing single image dehazing algorithms, using a new large-scale benchmark consisting of both synthetic and real-world hazy images, called REalistic Single Image DEhazing (RESIDE). RESIDE highlights diverse data sources and image contents, and is divided into five subsets, each serving different training or evaluation purposes. We further provide a rich variety of criteria for dehazing algorithm evaluation, ranging from full-reference metrics, to no-reference metrics, to subjective evaluation and the novel task-driven evaluation. Experiments on RESIDE shed light on the comparisons and limitations of state-of-the-art dehazing algorithms, and suggest promising future directions. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 86,561 |
1606.02547 | Help, Anyone? A User Study For Modeling Robotic Behavior To Mitigate
Malfunctions With The Help Of The User | Service robots for the domestic environment are intended to autonomously provide support for their users. However, state-of-the-art robots still often get stuck in failure situations leading to breakdowns in the interaction flow from which the robot cannot recover alone. We performed a multi-user Wizard-of-Oz experiment in which we manipulated the robot's behavior in such a way that it appeared unexpected and malfunctioning, and asked participants to help the robot in order to restore the interaction flow. We examined how participants reacted to the robot's error, its subsequent request for help and how it changed their perception of the robot with respect to perceived intelligence, likability, and task contribution. As interaction scenario we used a game of building Lego models performed by user dyads. In total 38 participants interacted with the robot and helped in malfunctioning situations. We report two major findings: (1) in user dyads, the user who gave the last command followed by the user who is closer is more likely to help; (2) malfunctions that can be actively fixed by the user seem not to negatively impact perceived intelligence and likability ratings. This work offers insights into how far user support can be a strategy for domestic service robots to recover from repeating malfunctions. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 56,975
2009.02609 | Isotonic regression with unknown permutations: Statistics, computation,
and adaptation | Motivated by models for multiway comparison data, we consider the problem of estimating a coordinate-wise isotonic function on the domain $[0, 1]^d$ from noisy observations collected on a uniform lattice, but where the design points have been permuted along each dimension. While the univariate and bivariate versions of this problem have received significant attention, our focus is on the multivariate case $d \geq 3$. We study both the minimax risk of estimation (in empirical $L_2$ loss) and the fundamental limits of adaptation (quantified by the adaptivity index) to a family of piecewise constant functions. We provide a computationally efficient Mirsky partition estimator that is minimax optimal while also achieving the smallest adaptivity index possible for polynomial time procedures. Thus, from a worst-case perspective and in sharp contrast to the bivariate case, the latent permutations in the model do not introduce significant computational difficulties over and above vanilla isotonic regression. On the other hand, the fundamental limits of adaptation are significantly different with and without unknown permutations: Assuming a hardness conjecture from average-case complexity theory, a statistical-computational gap manifests in the former case. In a complementary direction, we show that natural modifications of existing estimators fail to satisfy at least one of the desiderata of optimal worst-case statistical performance, computational efficiency, and fast adaptation. Along the way to showing our results, we improve adaptation results in the special case $d = 2$ and establish some properties of estimators for vanilla isotonic regression, both of which may be of independent interest. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 194,595 |
2011.11440 | The Dynamic of Body and Brain Co-Evolution | We introduce a method that permits the co-evolution of the body and control properties of robots. It can be used to adapt the morphological traits of robots with a hand-designed morphological bauplan or to evolve the morphological bauplan as well. Our results indicate that robots with co-adapted body and control traits outperform robots with fixed hand-designed morphologies. Interestingly, the advantage is not due to the selection of better morphologies but rather to the mutual scaffolding process that results from the possibility to co-adapt the morphological traits to the control traits and vice versa. Our results also demonstrate that morphological variations do not necessarily have destructive effects on robot skills. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | true | false | false | 207,823
2501.09338 | Robust UAV Path Planning with Obstacle Avoidance for Emergency Rescue | Unmanned aerial vehicles (UAVs) are efficient tools for diverse tasks such as electronic reconnaissance, agricultural operations and disaster relief. In complex three-dimensional (3D) environments, path planning with obstacle avoidance for UAVs is a significant issue for security assurance. In this paper, we construct a comprehensive 3D scenario with obstacles and no-fly zones for dynamic UAV trajectories. Moreover, a novel artificial potential field algorithm coupled with simulated annealing (APF-SA) is proposed to tackle the robust path planning problem. APF-SA modifies the attractive and repulsive potential functions and leverages simulated annealing to escape local minima and converge to globally optimal solutions. Simulation results demonstrate the effectiveness of APF-SA, enabling efficient autonomous path planning for UAVs with obstacle avoidance. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 525,106
1206.4634 | Artist Agent: A Reinforcement Learning Approach to Automatic Stroke
Generation in Oriental Ink Painting | Oriental ink painting, called Sumi-e, is one of the most appealing painting styles that has attracted artists around the world. Major challenges in computer-based Sumi-e simulation are to abstract complex scene information and draw smooth and natural brush strokes. To automatically find such strokes, we propose to model the brush as a reinforcement learning agent, and learn desired brush-trajectories by maximizing the sum of rewards in the policy search framework. We also provide elaborate design of actions, states, and rewards tailored for a Sumi-e agent. The effectiveness of our proposed approach is demonstrated through simulated Sumi-e experiments. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 16,685 |
2312.01985 | UniGS: Unified Representation for Image Generation and Segmentation | This paper introduces a novel unified representation of diffusion models for image generation and segmentation. Specifically, we use a colormap to represent entity-level masks, addressing the challenge of varying entity numbers while aligning the representation closely with the image RGB domain. Two novel modules, including the location-aware color palette and progressive dichotomy module, are proposed to support our mask representation. On the one hand, a location-aware palette guarantees the colors' consistency to entities' locations. On the other hand, the progressive dichotomy module can efficiently decode the synthesized colormap to high-quality entity-level masks in a depth-first binary search without knowing the cluster numbers. To tackle the issue of lacking large-scale segmentation training data, we employ an inpainting pipeline and then improve the flexibility of diffusion models across various tasks, including inpainting, image synthesis, referring segmentation, and entity segmentation. Comprehensive experiments validate the efficiency of our approach, demonstrating comparable segmentation mask quality to state-of-the-art and adaptability to multiple tasks. The code will be released at \href{https://github.com/qqlu/Entity}{https://github.com/qqlu/Entity}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 412,647 |
1811.12345 | Graph Multiview Canonical Correlation Analysis | Multiview canonical correlation analysis (MCCA) seeks latent low-dimensional representations encountered with multiview data of shared entities (a.k.a. common sources). However, existing MCCA approaches do not exploit the geometry of the common sources, which may be available \emph{a priori}, or can be constructed using certain domain knowledge. This prior information about the common sources can be encoded by a graph, and be invoked as a regularizer to enrich the maximum variance MCCA framework. In this context, the present paper's novel graph-regularized (G) MCCA approach minimizes the distance between the wanted canonical variables and the common low-dimensional representations, while accounting for graph-induced knowledge of the common sources. Relying on a function capturing the extent low-dimensional representations of the multiple views are similar, a generalization bound of GMCCA is established based on Rademacher's complexity. Tailored for setups where the number of data pairs is smaller than the data vector dimensions, a graph-regularized dual MCCA approach is also developed. To further deal with nonlinearities present in the data, graph-regularized kernel MCCA variants are put forward too. Interestingly, solutions of the graph-regularized linear, dual, and kernel MCCA, are all provided in terms of generalized eigenvalue decomposition. Several corroborating numerical tests using real datasets are provided to showcase the merits of the graph-regularized MCCA variants relative to several competing alternatives including MCCA, Laplacian-regularized MCCA, and (graph-regularized) PCA. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 114,996 |
1703.04381 | On the Transformation Capability of Feasible Mechanisms for Programmable
Matter | In this work, we study theoretical models of \emph{programmable matter} systems. The systems under consideration consist of spherical modules, kept together by magnetic forces and able to perform two minimal mechanical operations (or movements): \emph{rotate} around a neighbor and \emph{slide} over a line. In terms of modeling, there are $n$ nodes arranged in a 2-dimensional grid and forming some initial \emph{shape}. The goal is for the initial shape $A$ to \emph{transform} to some target shape $B$ by a sequence of movements. Most of the paper focuses on \emph{transformability} questions, meaning whether it is in principle feasible to transform a given shape to another. We first consider the case in which only rotation is available to the nodes. Our main result is that deciding whether two given shapes $A$ and $B$ can be transformed to each other, is in $\mathbf{P}$. We then insist on rotation only and impose the restriction that the nodes must maintain global connectivity throughout the transformation. We prove that the corresponding transformability question is in $\mathbf{PSPACE}$ and study the problem of determining the minimum \emph{seeds} that can make feasible, otherwise infeasible transformations. Next we allow both rotations and slidings and prove universality: any two connected shapes $A,B$ of the same order, can be transformed to each other without breaking connectivity. The worst-case number of movements of the generic strategy is $\Omega(n^2)$. We improve this to $O(n)$ parallel time, by a pipelining strategy, and prove optimality of both by matching lower bounds. In the last part of the paper, we turn our attention to distributed transformations. The nodes are now distributed processes able to perform communicate-compute-move rounds. We provide distributed algorithms for a general type of transformations. 
| false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 69,890 |
2207.01529 | Cybersecurity Discussions in Stack Overflow: A Developer-Centred
Analysis of Engagement and Self-Disclosure Behaviour | Stack Overflow (SO) is a popular platform among developers seeking advice on various software-related topics, including privacy and security. As for many knowledge-sharing websites, the value of SO depends largely on users' engagement, namely their willingness to answer, comment or post technical questions. Still, many of these questions (including cybersecurity-related ones) remain unanswered, putting the site's relevance and reputation into question. Hence, it is important to understand users' participation in privacy and security discussions to promote engagement and foster the exchange of such expertise. Objective: Based on prior findings on online social networks, this work elaborates on the interplay between users' engagement and their privacy practices in SO. Particularly, it analyses developers' self-disclosure behaviour regarding profile visibility and their involvement in discussions related to privacy and security. Method: We followed a mixed-methods approach by (i) analysing SO data from 1239 cybersecurity-tagged questions along with 7048 user profiles, and (ii) conducting an anonymous online survey (N=64). Results: About 33% of the questions we retrieved had no answer, whereas more than 50% had no accepted answer. We observed that "proactive" users tend to disclose significantly less information in their profiles than "reactive" and "unengaged" ones. However, no correlations were found between these engagement categories and privacy-related constructs such as Perceived Control or General Privacy Concerns. Implications: These findings contribute to (i) a better understanding of developers' engagement towards privacy and security topics, and (ii) to shape strategies promoting the exchange of cybersecurity expertise in SO. | true | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | true | 306,210 |
2206.02331 | MASNet: Improve Performance of Siamese Networks with Mutual-attention for
Remote Sensing Change Detection Tasks | Siamese networks are widely used for remote sensing change detection tasks. A vanilla siamese network has two identical feature extraction branches which share weights; these two branches work independently, and the feature maps are not fused until they are about to be sent to a decoder head. However, we find that it is critical to exchange information between the two feature extraction branches at an early stage for change detection tasks. In this work we present Mutual-Attention Siamese Network (MASNet), a general siamese network with a mutual-attention plug-in, to exchange information between the two feature extraction branches. We show that our modification improves the performance of siamese networks on multiple change detection datasets, and it works for both convolutional neural networks and visual transformers. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 300,846
2406.03652 | Ensembling Portfolio Strategies for Long-Term Investments: A
Distribution-Free Preference Framework for Decision-Making and Algorithms | This paper investigates the problem of ensembling multiple strategies for sequential portfolios to outperform individual strategies in terms of long-term wealth. Due to the uncertainty of strategies' performances in the future market, which are often based on specific models and statistical assumptions, investors often mitigate risk and enhance robustness by combining multiple strategies, akin to common approaches in collective learning prediction. However, the absence of a distribution-free and consistent preference framework complicates decisions of combination due to the ambiguous objective. To address this gap, we introduce a novel framework for decision-making in combining strategies, irrespective of market conditions, by establishing the investor's preference between decisions and then forming a clear objective. Through this framework, we propose a combinatorial strategy construction, free from statistical assumptions, for any scale of component strategies, even infinite, such that it meets the determined criterion. Finally, we test the proposed strategy along with its accelerated variant and some other multi-strategies. The numerical experiments show results in favor of the proposed strategies: their cumulative wealths eventually exceed those of the best component strategies, albeit with small tradeoffs in their Sharpe ratios, while the accelerated strategy significantly improves performance. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 461,320
1804.02339 | Adaptive Three Operator Splitting | We propose and analyze an adaptive step-size variant of the Davis-Yin three operator splitting. This method can solve optimization problems composed by a sum of a smooth term for which we have access to its gradient and an arbitrary number of potentially non-smooth terms for which we have access to their proximal operator. The proposed method sets the step-size based on local information of the objective --hence allowing for larger step-sizes--, only requires two extra function evaluations per iteration and does not depend on any step-size hyperparameter besides an initial estimate. We provide an iteration complexity analysis that matches the best known results for the non-adaptive variant: sublinear convergence for general convex functions and linear convergence under strong convexity of the smooth term and smoothness of one of the proximal terms. Finally, an empirical comparison with related methods on 6 different problems illustrates the computational advantage of the proposed method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 94,383 |
1604.08168 | A "Social Bitcoin" could sustain a democratic digital world | A multidimensional financial system could provide benefits for individuals, companies, and states. Instead of top-down control, which is destined to eventually fail in a hyperconnected world, a bottom-up creation of value can unleash creative potential and drive innovations. Multiple currency dimensions can represent different externalities and thus enable the design of incentives and feedback mechanisms that foster the ability of complex dynamical systems to self-organize and lead to a more resilient society and sustainable economy. Modern information and communication technologies play a crucial role in this process, as Web 2.0 and online social networks promote cooperation and collaboration on unprecedented scales. Within this contribution, we discuss how one dimension of a multidimensional currency system could represent socio-digital capital (Social Bitcoins) that can be generated in a bottom-up way by individuals who perform search and navigation tasks in a future version of the digital world. The incentive to mine Social Bitcoins could sustain digital diversity, which mitigates the risk of totalitarian control by powerful monopolies of information and can create new business opportunities needed in times where a large fraction of current jobs is estimated to disappear due to computerisation. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 55,177 |
2407.13392 | Lightweight Uncertainty Quantification with Simplex Semantic
Segmentation for Terrain Traversability | For navigation of robots, image segmentation is an important component to determining a terrain's traversability. For safe and efficient navigation, it is key to assess the uncertainty of the predicted segments. Current uncertainty estimation methods are limited to a specific choice of model architecture, are costly in terms of training time, require large memory for inference (ensembles), or involve complex model architectures (energy-based, hyperbolic, masking). In this paper, we propose a simple, light-weight module that can be connected to any pretrained image segmentation model, regardless of its architecture, with marginal additional computation cost because it reuses the model's backbone. Our module is based on maximum separation of the segmentation classes by respective prototype vectors. This optimizes the probability that out-of-distribution segments are projected in between the prototype vectors. The uncertainty value in the classification label is obtained from the distance to the nearest prototype. We demonstrate the effectiveness of our module for terrain segmentation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 474,362 |
2412.06869 | Safety Monitoring of Machine Learning Perception Functions: a Survey | Machine Learning (ML) models, such as deep neural networks, are widely applied in autonomous systems to perform complex perception tasks. New dependability challenges arise when ML predictions are used in safety-critical applications, like autonomous cars and surgical robots. Thus, the use of fault tolerance mechanisms, such as safety monitors, is essential to ensure the safe behavior of the system despite the occurrence of faults. This paper presents an extensive literature review on safety monitoring of perception functions using ML in a safety-critical context. In this review, we structure the existing literature to highlight key factors to consider when designing such monitors: threat identification, requirements elicitation, detection of failure, reaction, and evaluation. We also highlight the ongoing challenges associated with safety monitoring and suggest directions for future research. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 515,431 |
2107.14062 | Structure and Performance of Fully Connected Neural Networks: Emerging
Complex Network Properties | Understanding the behavior of Artificial Neural Networks has recently become one of the main topics in the field, as black-box approaches have become usual since the widespread adoption of deep learning. Such high-dimensional models may manifest instabilities and weird properties that resemble complex systems. Therefore, we propose Complex Network (CN) techniques to analyze the structure and performance of fully connected neural networks. For that, we build a dataset with 4 thousand models and their respective CN properties. They are employed in a supervised classification setup considering four vision benchmarks. Each neural network is approached as a weighted and undirected graph of neurons and synapses, and centrality measures are computed after training. Results show that these measures are highly related to the network classification performance. We also propose the concept of Bag-Of-Neurons (BoN), a CN-based approach for finding topological signatures linking similar neurons. Results suggest that six neuronal types emerge in such networks, independently of the target domain, and are distributed differently according to classification accuracy. We also tackle specific CN properties related to performance, such as higher subgraph centrality on lower-performing models. Our findings suggest that CN properties play a critical role in the performance of fully connected neural networks, with topological patterns emerging independently on a wide range of models. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 248,369
2404.01053 | HAHA: Highly Articulated Gaussian Human Avatars with Textured Mesh Prior | We present HAHA - a novel approach for animatable human avatar generation from monocular input videos. The proposed method relies on learning the trade-off between the use of Gaussian splatting and a textured mesh for efficient and high fidelity rendering. We demonstrate its efficiency to animate and render full-body human avatars controlled via the SMPL-X parametric model. Our model learns to apply Gaussian splatting only in areas of the SMPL-X mesh where it is necessary, like hair and out-of-mesh clothing. This results in a minimal number of Gaussians being used to represent the full avatar, and reduced rendering artifacts. This allows us to handle the animation of small body parts such as fingers that are traditionally disregarded. We demonstrate the effectiveness of our approach on two open datasets: SnapshotPeople and X-Humans. Our method demonstrates on par reconstruction quality to the state-of-the-art on SnapshotPeople, while using less than a third of Gaussians. HAHA outperforms previous state-of-the-art on novel poses from X-Humans both quantitatively and qualitatively. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 443,229 |
2212.13904 | A Novel Self-Supervised Learning-Based Anomaly Node Detection Method
Based on an Autoencoder in Wireless Sensor Networks | Due to the issue that existing wireless sensor network (WSN)-based anomaly detection methods only consider and analyze temporal features, in this paper, a self-supervised learning-based anomaly node detection method based on an autoencoder is designed. This method integrates temporal WSN data flow feature extraction, spatial position feature extraction and intermodal WSN correlation feature extraction into the design of the autoencoder to make full use of the spatial and temporal information of the WSN for anomaly detection. First, a fully connected network is used to extract the temporal features of nodes by considering a single mode from a local spatial perspective. Second, a graph neural network (GNN) is used to introduce the WSN topology from a global spatial perspective for anomaly detection and extract the spatial and temporal features of the data flows of nodes and their neighbors by considering a single mode. Then, the adaptive fusion method involving weighted summation is used to extract the relevant features between different models. In addition, this paper introduces a gated recurrent unit (GRU) to solve the long-term dependence problem of the time dimension. Eventually, the reconstructed output of the decoder and the hidden layer representation of the autoencoder are fed into a fully connected network to calculate the anomaly probability of the current system. Since the spatial feature extraction operation is advanced, the designed method can be applied to the task of large-scale network anomaly detection by adding a clustering operation. Experiments show that the designed method outperforms the baselines, and the F1 score reaches 90.6%, which is 5.2% higher than those of the existing anomaly detection methods based on unsupervised reconstruction and prediction. Code and model are available at https://github.com/GuetYe/anomaly_detection/GLSL | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 338,432
2410.23029 | Planning and Learning in Risk-Aware Restless Multi-Arm Bandit Problem | In restless multi-arm bandits, a central agent is tasked with optimally distributing limited resources across several bandits (arms), with each arm being a Markov decision process. In this work, we generalize the traditional restless multi-arm bandit problem with a risk-neutral objective by incorporating risk-awareness. We establish indexability conditions for the case of a risk-aware objective and provide a solution based on Whittle index. In addition, we address the learning problem when the true transition probabilities are unknown by proposing a Thompson sampling approach and show that it achieves bounded regret that scales sublinearly with the number of episodes and quadratically with the number of arms. The efficacy of our method in reducing risk exposure in restless multi-arm bandits is illustrated through a set of numerical experiments. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 503,871 |
1302.6574 | Energy and Sampling Constrained Asynchronous Communication | The minimum energy, and, more generally, the minimum cost, to transmit one bit of information has been recently derived for bursty communication when information is available infrequently at random times at the transmitter. This result assumes that the receiver is always in the listening mode and samples all channel outputs until it makes a decision. If the receiver is constrained to sample only a fraction f>0 of the channel outputs, what is the cost penalty due to sparse output sampling? Remarkably, there is no penalty: regardless of f>0 the asynchronous capacity per unit cost is the same as under full sampling, i.e., when f=1. There is not even a penalty in terms of decoding delay---the elapsed time between when information is available until when it is decoded. This latter result relies on the possibility to sample adaptively; the next sample can be chosen as a function of past samples. Under non-adaptive sampling, it is possible to achieve the full sampling asynchronous capacity per unit cost, but the decoding delay gets multiplied by 1/f. Therefore adaptive sampling strategies are of particular interest in the very sparse sampling regime. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 22,389
1201.3745 | Social Networks Research Aspects: A Vast and Fast Survey Focused on the
Issue of Privacy in Social Network Sites | The increasing participation of people in online activities in recent years like content publishing, and having different kinds of relationships and interactions, along with the emergence of online social networks and people's extensive tendency toward them, have resulted in generation and availability of a huge amount of valuable information that has never been available before, and have introduced some new, attractive, varied, and useful research areas to researchers. In this paper we try to review some of the accomplished research on information of SNSs (Social Network Sites), and introduce some of the attractive applications that analyzing this information has. This will lead to the introduction of some new research areas to researchers. By reviewing the research in this area we will present a categorization of research topics about online social networks. This categorization includes seventeen research subtopics or subareas that will be introduced along with some of the accomplished research in these subareas. According to the consequences (slight, significant, and sometimes catastrophic) that revelation of personal and private information has, a research area that researchers have vastly investigated is privacy in online social networks. After an overview on different research subareas of SNSs, we will get more focused on the subarea of privacy protection in social networks, and introduce different aspects of it along with a categorization of these aspects. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 13,870 |
2201.11176 | DiscoScore: Evaluating Text Generation with BERT and Discourse Coherence | Recently, there has been a growing interest in designing text generation systems from a discourse coherence perspective, e.g., modeling the interdependence between sentences. Still, recent BERT-based evaluation metrics are weak in recognizing coherence, and thus are not reliable in a way to spot the discourse-level improvements of those text generation systems. In this work, we introduce DiscoScore, a parametrized discourse metric, which uses BERT to model discourse coherence from different perspectives, driven by Centering theory. Our experiments encompass 16 non-discourse and discourse metrics, including DiscoScore and popular coherence models, evaluated on summarization and document-level machine translation (MT). We find that (i) the majority of BERT-based metrics correlate much worse with human rated coherence than early discourse metrics, invented a decade ago; (ii) the recent state-of-the-art BARTScore is weak when operated at system level -- which is particularly problematic as systems are typically compared in this manner. DiscoScore, in contrast, achieves strong system-level correlation with human ratings, not only in coherence but also in factual consistency and other aspects, and surpasses BARTScore by over 10 correlation points on average. Further, aiming to understand DiscoScore, we provide justifications to the importance of discourse coherence for evaluation metrics, and explain the superiority of one variant over another. Our code is available at \url{https://github.com/AIPHES/DiscoScore}. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 277,206 |
2108.11838 | Geometry Based Machining Feature Retrieval with Inductive Transfer
Learning | Manufacturing industries have widely adopted the reuse of machine parts as a method to reduce costs and as a sustainable manufacturing practice. Identification of reusable features from the design of the parts and finding their similar features from the database is an important part of this process. In this project, with the help of fully convolutional geometric features, we are able to extract and learn the high level semantic features from CAD models with inductive transfer learning. The extracted features are then compared with that of other CAD models from the database using Frobenius norm and identical features are retrieved. Later we passed the extracted features to a deep convolutional neural network with a spatial pyramid pooling layer and the performance of the feature retrieval increased significantly. It was evident from the results that the model could effectively capture the geometrical elements from machining features. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 252,311 |
1908.06842 | Performance Analysis of Cooperative V2V and V2I Communications under
Correlated Fading | Cooperative vehicular networks will play a vital role in the coming years to implement various intelligent transportation-related applications. Both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications will be needed to reliably disseminate information in a vehicular network. In this regard, a roadside unit (RSU) equipped with multiple antennas can improve the network capacity. While the traditional approaches assume antennas to experience independent fading, we consider a more practical uplink scenario where antennas at the RSU experience correlated fading. In particular, we evaluate the packet error probability for two renowned antenna correlation models, i.e., constant correlation (CC) and exponential correlation (EC). We also consider intermediate cooperative vehicles for reliable communication between the source vehicle and the RSU. Here, we derive closed-form expressions for packet error probability which help quantify the performance variations due to fading parameter, correlation coefficients and the number of intermediate helper vehicles. To evaluate the optimal transmit power in this network scenario, we formulate a Stackelberg game, wherein, the source vehicle is treated as a buyer and the helper vehicles are the sellers. The optimal solutions for the asking price and the transmit power are devised which maximize the utility functions of helper vehicles and the source vehicle, respectively. We verify our mathematical derivations by extensive simulations in MATLAB. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 142,125 |
1910.01382 | Silas: High Performance, Explainable and Verifiable Machine Learning | This paper introduces a new classification tool named Silas, which is built to provide a more transparent and dependable data analytics service. A focus of Silas is on providing a formal foundation of decision trees in order to support logical analysis and verification of learned prediction models. This paper describes the distinct features of Silas: The Model Audit module formally verifies the prediction model against user specifications, the Enforcement Learning module trains prediction models that are guaranteed correct, the Model Insight and Prediction Insight modules reason about the prediction model and explain the decision-making of predictions. We also discuss implementation details ranging from programming paradigm to memory management that help achieve high-performance computation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 147,922 |
2410.20163 | UniHGKR: Unified Instruction-aware Heterogeneous Knowledge Retrievers | Existing information retrieval (IR) models often assume a homogeneous structure for knowledge sources and user queries, limiting their applicability in real-world settings where retrieval is inherently heterogeneous and diverse. In this paper, we introduce UniHGKR, a unified instruction-aware heterogeneous knowledge retriever that (1) builds a unified retrieval space for heterogeneous knowledge and (2) follows diverse user instructions to retrieve knowledge of specified types. UniHGKR consists of three principal stages: heterogeneous self-supervised pretraining, text-anchored embedding alignment, and instruction-aware retriever fine-tuning, enabling it to generalize across varied retrieval contexts. This framework is highly scalable, with a BERT-based version and a UniHGKR-7B version trained on large language models. Also, we introduce CompMix-IR, the first native heterogeneous knowledge retrieval benchmark. It includes two retrieval scenarios with various instructions, over 9,400 question-answer (QA) pairs, and a corpus of 10 million entries, covering four different types of data. Extensive experiments show that UniHGKR consistently outperforms state-of-the-art methods on CompMix-IR, achieving up to 6.36% and 54.23% relative improvements in two scenarios, respectively. Finally, by equipping our retriever for open-domain heterogeneous QA systems, we achieve a new state-of-the-art result on the popular ConvMix task, with an absolute improvement of up to 5.90 points. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 502,678 |
2203.02480 | Didn't see that coming: a survey on non-verbal social human behavior
forecasting | Non-verbal social human behavior forecasting has increasingly attracted the interest of the research community in recent years. Its direct applications to human-robot interaction and socially-aware human motion generation make it a very attractive field. In this survey, we define the behavior forecasting problem for multiple interactive agents in a generic way that aims at unifying the fields of social signals prediction and human motion forecasting, traditionally separated. We hold that both problem formulations refer to the same conceptual problem, and identify many shared fundamental challenges: future stochasticity, context awareness, history exploitation, etc. We also propose a taxonomy that comprises methods published in the last 5 years in a very informative way and describes the current main concerns of the community with regard to this problem. In order to promote further research on this field, we also provide a summarised and friendly overview of audiovisual datasets featuring non-acted social interactions. Finally, we describe the most common metrics used in this task and their particular issues. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 283,757 |
2104.09338 | How modular structure determines operational resilience of power grids | The synchronization stability has been analyzed as one of the important dynamical characteristics of power grids. In this study, we bring the operational perspective to the synchronization stability analysis by counting not only full but also partial synchronization between nodes. To do so, we introduce two distinct measures that estimate the operational resilience of power-grid nodes: functional stability and functional resistance. We demonstrate the practical applicability of the measures in a model network motif and an IEEE test power grid. As a case study of the German power grid, we reveal that the modular structure of a power grid and particular unidirectional current flow determine the distribution of the operational resilience of power-grid nodes. Reproducing our finding on clustered benchmark networks, we validate the modular effect on power grid stability and confirm that our measures can be insightful tools to understand the power grids' synchronization dynamics. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 231,197
1503.06782 | Massive MIMO as a Big Data System: Random Matrix Models and Testbed | The paper has two parts. The first one deals with how to use large random matrices as building blocks to model the massive data arising from the massive (or large-scale) MIMO system. As a result, we apply this model for distributed spectrum sensing and network monitoring. The part boils down to the streaming, distributed massive data, for which a new algorithm is obtained and its performance is derived using the central limit theorem that is recently obtained in the literature. The second part deals with the large-scale testbed using software-defined radios (particularly USRP) that takes us more than four years to develop this 70-node network testbed. To demonstrate the power of the software defined radio, we reconfigure our testbed quickly into a testbed for massive MIMO. The massive data of this testbed is of central interest in this paper. It is for the first time for us to model the experimental data arising from this testbed. To our best knowledge, we are not aware of other similar work. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 41,400 |
2309.03224 | No Train Still Gain. Unleash Mathematical Reasoning of Large Language
Models with Monte Carlo Tree Search Guided by Energy Function | Large language models (LLMs) demonstrate impressive language understanding and contextual learning abilities, making them suitable for natural language processing (NLP) tasks and complex mathematical reasoning. However, when applied to mathematical reasoning tasks, LLMs often struggle to generate correct reasoning steps and answers despite having high probabilities for the solutions. To overcome this limitation and enhance the mathematical reasoning capabilities of fine-tuned LLMs without additional fine-tuning steps, we propose a method that incorporates Monte Carlo Tree Search (MCTS) and a lightweight energy function to rank decision steps and enable immediate reaction and precise reasoning. Specifically, we re-formulate the fine-tuned LLMs into a Residual-based Energy Model (Residual-EBM) and employ noise contrastive estimation to estimate the energy function's parameters. We then utilize MCTS with the energy function as a path verifier to search the output space and evaluate the reasoning path. Through extensive experiments on two mathematical reasoning benchmarks, GSM8k and AQUA-RAT, we demonstrate the exceptional capabilities of our method, which significantly improves the pass@1 metric of the fine-tuned model without requiring additional fine-tuning or reinforcement learning with human feedback alignment. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 390,316 |
1811.12016 | 3D Shape Reconstruction from a Single 2D Image via 2D-3D
Self-Consistency | Aiming at inferring 3D shapes from 2D images, 3D shape reconstruction has drawn huge attention from researchers in computer vision and deep learning communities. However, it is not practical to assume that 2D input images and their associated ground truth 3D shapes are always available during training. In this paper, we propose a framework for semi-supervised 3D reconstruction. This is realized by our introduced 2D-3D self-consistency, which aligns the predicted 3D models and the projected 2D foreground segmentation masks. Moreover, our model not only enables recovering 3D shapes with the corresponding 2D masks, camera pose information can be jointly disentangled and predicted, even such supervision is never available during training. In the experiments, we qualitatively and quantitatively demonstrate the effectiveness of our model, which performs favorably against state-of-the-art approaches in either supervised or semi-supervised settings. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 114,913 |
1610.03936 | A framework for analyzing contagion in assortative banking networks | We introduce a probabilistic framework that represents stylized banking networks with the aim of predicting the size of contagion events. Most previous work on random financial networks assumes independent connections between banks, whereas our framework explicitly allows for (dis)assortative edge probabilities (e.g., a tendency for small banks to link to large banks). We analyze default cascades triggered by shocking the network and find that the cascade can be understood as an explicit iterated mapping on a set of edge probabilities that converges to a fixed point. We derive a cascade condition that characterizes whether or not an infinitesimal shock to the network can grow to a finite size cascade, in analogy to the basic reproduction number $R_0$ in epidemic modelling. The cascade condition provides an easily computed measure of the systemic risk inherent in a given banking network topology. Using the percolation theory for random networks we also derive an analytic formula for the frequency of global cascades. Although the analytical methods are derived for infinite networks, we demonstrate using Monte Carlo simulations the applicability of the results to finite-sized networks. We show that edge-assortativity, the propensity of nodes to connect to similar nodes, can have a strong effect on the level of systemic risk as measured by the cascade condition. However, the effect of assortativity on systemic risk is subtle, and we propose a simple graph theoretic quantity, which we call the graph-assortativity coefficient, that can be used to assess systemic risk. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 62,318 |
2406.11640 | Linear Bellman Completeness Suffices for Efficient Online Reinforcement
Learning with Few Actions | One of the most natural approaches to reinforcement learning (RL) with function approximation is value iteration, which inductively generates approximations to the optimal value function by solving a sequence of regression problems. To ensure the success of value iteration, it is typically assumed that Bellman completeness holds, which ensures that these regression problems are well-specified. We study the problem of learning an optimal policy under Bellman completeness in the online model of RL with linear function approximation. In the linear setting, while statistically efficient algorithms are known under Bellman completeness (e.g., Jiang et al. (2017); Zanette et al. (2020)), these algorithms all rely on the principle of global optimism which requires solving a nonconvex optimization problem. In particular, it has remained open as to whether computationally efficient algorithms exist. In this paper we give the first polynomial-time algorithm for RL under linear Bellman completeness when the number of actions is any constant. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 464,977 |
1107.0268 | Simple Algorithm Portfolio for SAT | The importance of algorithm portfolio techniques for SAT has long been noted, and a number of very successful systems have been devised, including the most successful one --- SATzilla. However, all these systems are quite complex (to understand, reimplement, or modify). In this paper we propose a new algorithm portfolio for SAT that is extremely simple, but at the same time so efficient that it outperforms SATzilla. For a new SAT instance to be solved, our portfolio finds its k-nearest neighbors from the training set and invokes a solver that performs the best at those instances. The main distinguishing feature of our algorithm portfolio is the locality of the selection procedure --- the selection of a SAT solver is based only on few instances similar to the input one. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 11,140
2008.00783 | Attribute-aware Diversification for Sequential Recommendations | Users prefer diverse recommendations over homogeneous ones. However, most previous work on Sequential Recommenders does not consider diversity, and strives for maximum accuracy, resulting in homogeneous recommendations. In this paper, we consider both accuracy and diversity by presenting an Attribute-aware Diversifying Sequential Recommender (ADSR). Specifically, ADSR utilizes available attribute information when modeling a user's sequential behavior to simultaneously learn the user's most likely item to interact with, and their preference of attributes. Then, ADSR diversifies the recommended items based on the predicted preference for certain attributes. Experiments on two benchmark datasets demonstrate that ADSR can effectively provide diverse recommendations while maintaining accuracy. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 190,110 |
1611.01673 | Generative Multi-Adversarial Networks | Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple discriminators. In previous work, the successful training of GANs requires modifying the minimax objective to accelerate training early on. In contrast, GMAN can be reliably trained with the original, untampered objective. We explore a number of design perspectives with the discriminator role ranging from formidable adversary to forgiving teacher. Image generation tasks comparing the proposed framework to standard GANs demonstrate GMAN produces higher quality samples in a fraction of the iterations when measured by a pairwise GAM-type metric. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | true | false | false | 63,417 |
2001.00579 | A Comparative Evaluation of Pitch Modification Techniques | This paper addresses the problem of pitch modification, as an important module for an efficient voice transformation system. The Deterministic plus Stochastic Model of the residual signal we proposed in a previous work is compared to TDPSOLA, HNM and STRAIGHT. The four methods are compared through an important subjective test. The influence of the speaker gender and of the pitch modification ratio is analyzed. Despite its higher compression level, the DSM technique is shown to give similar or better results than other methods, especially for male speakers and important ratios of modification. The DSM turns out to be only outperformed by STRAIGHT for female voices. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 159,259 |
2407.15249 | Hurricane Evacuation Analysis with Large-scale Mobile Device Location
Data during Hurricane Ian | Hurricane Ian is the deadliest and costliest hurricane in Florida's history, with 2.5 million people ordered to evacuate. As we witness increasingly severe hurricanes in the context of climate change, mobile device location data offers an unprecedented opportunity to study hurricane evacuation behaviors. With a terabyte-level GPS dataset, we introduce a holistic hurricane evacuation behavior algorithm with a case study of Ian: we infer evacuees' departure time and categorize them into different behavioral groups, including self, voluntary, mandatory, shadow and in-zone evacuees. Results show the landfall area (Fort Myers, Lee County) had lower out-of-zone but higher overall evacuation rate, while the predicted landfall area (Tampa, Hillsborough County) had the opposite, suggesting the effects of delayed evacuation order. Out-of-zone evacuation rates would increase from shore to inland. Spatiotemporal analysis identified three evacuation waves: during formation, before landfall, and after landfall. These insights are valuable for enhancing future disaster planning and management. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 475,100 |
1906.03492 | Improving Low-Resource Cross-lingual Document Retrieval by Reranking
with Deep Bilingual Representations | In this paper, we propose to boost low-resource cross-lingual document retrieval performance with deep bilingual query-document representations. We match queries and documents in both source and target languages with four components, each of which is implemented as a term interaction-based deep neural network with cross-lingual word embeddings as input. By including query likelihood scores as extra features, our model effectively learns to rerank the retrieved documents by using a small number of relevance labels for low-resource language pairs. Due to the shared cross-lingual word embedding space, the model can also be directly applied to another language pair without any training label. Experimental results on the MATERIAL dataset show that our model outperforms the competitive translation-based baselines on English-Swahili, English-Tagalog, and English-Somali cross-lingual information retrieval tasks. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 134,391 |
2202.12076 | Phrase-Based Affordance Detection via Cyclic Bilateral Interaction | Affordance detection, which refers to perceiving objects with potential action possibilities in images, is a challenging task since the possible affordance depends on the person's purpose in real-world application scenarios. The existing works mainly extract the inherent human-object dependencies from image/video to accommodate affordance properties that change dynamically. In this paper, we explore to perceive affordance from a vision-language perspective and consider the challenging phrase-based affordance detection problem,i.e., given a set of phrases describing the action purposes, all the object regions in a scene with the same affordance should be detected. To this end, we propose a cyclic bilateral consistency enhancement network (CBCE-Net) to align language and vision features progressively. Specifically, the presented CBCE-Net consists of a mutual guided vision-language module that updates the common features of vision and language in a progressive manner, and a cyclic interaction module (CIM) that facilitates the perception of possible interaction with objects in a cyclic manner. In addition, we extend the public Purpose-driven Affordance Dataset (PAD) by annotating affordance categories with short phrases. The contrastive experimental results demonstrate the superiority of our method over nine typical methods from four relevant fields in terms of both objective metrics and visual quality. The related code and dataset will be released at \url{https://github.com/lulsheng/CBCE-Net}. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 282,101 |
2409.12658 | Exploring the topics, sentiments and hate speech in the Spanish
information environment | In the digital era, the internet and social media have transformed communication but have also facilitated the spread of hate speech and disinformation, leading to radicalization, polarization, and toxicity. This is especially concerning for media outlets due to their significant role in shaping public discourse. This study examines the topics, sentiments, and hate prevalence in 337,807 response messages (website comments and tweets) to news from five Spanish media outlets (La Vanguardia, ABC, El País, El Mundo, and 20 Minutos) in January 2021. These public reactions were originally labeled as distinct types of hate by experts following an original procedure, and they are now classified into three sentiment values (negative, neutral, or positive) and main topics. The BERTopic unsupervised framework was used to extract 81 topics, manually named with the help of Large Language Models (LLMs) and grouped into nine primary categories. Results show social issues (22.22%), expressions and slang (20.35%), and political issues (11.80%) as the most discussed. Content is mainly negative (62.7%) and neutral (28.57%), with low positivity (8.73%). Toxic narratives relate to conversation expressions, gender, feminism, and COVID-19. Despite low levels of hate speech (3.98%), the study confirms high toxicity in online responses to social and political topics. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 489,672
2410.22475 | Ethical Statistical Practice and Ethical AI | Artificial Intelligence (AI) is a field that utilizes computing and often, data and statistics, intensively together to solve problems or make predictions. AI has been evolving with literally unbelievable speed over the past few years, and this has led to an increase in social, cultural, industrial, scientific, and governmental concerns about the ethical development and use of AI systems worldwide. The ASA has issued a statement on ethical statistical practice and AI (ASA, 2024), which echoes similar statements from other groups. Here we discuss the support for ethical statistical practice and ethical AI that has been established in long-standing human rights law and ethical practice standards for computing and statistics. There are multiple sources of support for ethical statistical practice and ethical AI deriving from these source documents, which are critical for strengthening the operationalization of the "Statement on Ethical AI for Statistics Practitioners". These resources are explicated for interested readers to utilize to guide their development and use of AI in, and through, their statistical practice. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 503,642 |
1708.04649 | Machine Learning for Survival Analysis: A Survey | Accurately predicting the time of occurrence of an event of interest is a critical problem in longitudinal data analysis. One of the main challenges in this context is the presence of instances whose event outcomes become unobservable after a certain time point or when some instances do not experience any event during the monitoring period. Such a phenomenon is called censoring which can be effectively handled using survival analysis techniques. Traditionally, statistical approaches have been widely developed in the literature to overcome this censoring issue. In addition, many machine learning algorithms are adapted to effectively handle survival data and tackle other challenging problems that arise in real-world data. In this survey, we provide a comprehensive and structured review of the representative statistical methods along with the machine learning techniques used in survival analysis and provide a detailed taxonomy of the existing methods. We also discuss several topics that are closely related to survival analysis and illustrate several successful applications in various real-world application domains. We hope that this paper will provide a more thorough understanding of the recent advances in survival analysis and offer some guidelines on applying these approaches to solve new problems that arise in applications with censored data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 78,985 |
2212.10352 | Fixed-Weight Difference Target Propagation | Target Propagation (TP) is a biologically more plausible algorithm than error backpropagation (BP) for training deep networks, and improving the practicality of TP is an open issue. TP methods require the feedforward and feedback networks to form layer-wise autoencoders for propagating the target values generated at the output layer. However, this causes certain drawbacks; e.g., careful hyperparameter tuning is required to synchronize the feedforward and feedback training, and more frequent updates of the feedback path are usually required than of the feedforward path. Learning of the feedforward and feedback networks is sufficient to make TP methods capable of training, but are these layer-wise autoencoders a necessary condition for TP to work? We answer this question by presenting Fixed-Weight Difference Target Propagation (FW-DTP), which keeps the feedback weights constant during training. We confirmed that this simple method, which naturally resolves the abovementioned problems of TP, can still deliver informative target values to hidden layers for a given task; indeed, FW-DTP consistently achieves higher test performance than a baseline, the Difference Target Propagation (DTP), on four classification datasets. We also present a novel propagation architecture that explains the exact form of the feedback function of DTP to analyze FW-DTP. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 337,424
2103.08070 | Learning robust driving policies without online exploration | We propose a multi-time-scale predictive representation learning method to efficiently learn, in an offline manner, robust driving policies that generalize well to novel road geometries and to damaged and distracting lane conditions not covered in the offline training data. We show that our proposed representation learning method can be applied easily in an offline (batch) reinforcement learning setting, demonstrating the ability to generalize well and efficiently under novel conditions compared to standard batch RL methods. Our proposed method utilizes training data collected entirely offline in the real world, which removes the need for intensive online exploration that impedes applying deep reinforcement learning to real-world robot training. Various experiments were conducted in both simulator and real-world scenarios for the purpose of evaluation and analysis of our proposed claims. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 224,785
2407.08484 | Learning Localization of Body and Finger Animation Skeleton Joints on
Three-Dimensional Models of Human Bodies | Contemporary approaches to solving various problems that require analyzing three-dimensional (3D) meshes and point clouds have adopted the use of deep learning algorithms that directly process 3D data such as point coordinates, normal vectors and vertex connectivity information. Our work proposes one such solution to the problem of positioning body and finger animation skeleton joints within 3D models of human bodies. Due to scarcity of annotated real human scans, we resort to generating synthetic samples while varying their shape and pose parameters. Similarly to the state-of-the-art approach, our method computes each joint location as a convex combination of input points. Given only a list of point coordinates and normal vector estimates as input, a dynamic graph convolutional neural network is used to predict the coefficients of the convex combinations. By comparing our method with the state-of-the-art, we show that it is possible to achieve significantly better results with a simpler architecture, especially for finger joints. Since our solution requires fewer precomputed features, it also allows for shorter processing times. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 472,183 |
2111.02314 | Online Learning of Energy Consumption for Navigation of Electric
Vehicles | Energy efficient navigation constitutes an important challenge in electric vehicles, due to their limited battery capacity. We employ a Bayesian approach to model the energy consumption at road segments for efficient navigation. In order to learn the model parameters, we develop an online learning framework and investigate several exploration strategies such as Thompson Sampling and Upper Confidence Bound. We then extend our online learning framework to the multi-agent setting, where multiple vehicles adaptively navigate and learn the parameters of the energy model. We analyze Thompson Sampling and establish rigorous regret bounds on its performance in the single-agent and multi-agent settings, through an analysis of the algorithm under batched feedback. Finally, we demonstrate the performance of our methods via experiments on several real-world city road networks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 264,831 |
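The rows above follow the multi-label schema listed in the column head: an arXiv id, title, abstract, one boolean flag per `cs.*` category, and a trailing `__index_level_0__`. A minimal sketch of reading those flags back into category names is shown below; the `CATEGORIES` list mirrors the head's column order, while `active_labels` and the abridged example row are illustrative names, not part of the dataset itself.

```python
# Category columns in the order given by the dataset's column head.
CATEGORIES = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_labels(row: dict) -> list:
    """Return the names of the category columns flagged True for a row."""
    return [c for c in CATEGORIES if row.get(c)]

# Abridged example row (not a verbatim record): the survival-analysis
# survey above carries True only in its cs.LG column.
row = {c: False for c in CATEGORIES}
row.update({"id": "1708.04649", "cs.LG": True})
print(active_labels(row))  # ['cs.LG']
```

A row tagged with several flags (e.g. both `cs.CL` and `cs.IR`) would simply yield a longer list, which is why the schema uses one boolean per category rather than a single label column.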