id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2408.07724 | "Normalized Stress" is Not Normalized: How to Interpret Stress Correctly | Stress is among the most commonly employed quality metrics and optimization criteria for dimension reduction projections of high dimensional data. Complex, high dimensional data is ubiquitous across many scientific disciplines, including machine learning, biology, and the social sciences. One of the primary methods of visualizing these datasets is with two dimensional scatter plots that visually capture some properties of the data. Because visually determining the accuracy of these plots is challenging, researchers often use quality metrics to measure projection accuracy or faithfulness to the full data. One of the most commonly employed metrics, normalized stress, is sensitive to uniform scaling of the projection, despite this act not meaningfully changing anything about the projection. We investigate the effect of scaling on stress and other distance based quality metrics analytically and empirically by showing just how much the values change and how this affects dimension reduction technique evaluations. We introduce a simple technique to make normalized stress scale invariant and show that it accurately captures expected behavior on a small benchmark. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 480,702 |
2208.03624 | Graph R-CNN: Towards Accurate 3D Object Detection with Semantic-Decorated Local Graph | Two-stage detectors have gained much popularity in 3D object detection. Most two-stage 3D detectors utilize grid points, voxel grids, or sampled keypoints for RoI feature extraction in the second stage. Such methods, however, are inefficient in handling unevenly distributed and sparse outdoor points. This paper solves this problem in three aspects. 1) Dynamic Point Aggregation. We propose patch search to quickly find points in a local region for each 3D proposal. Dynamic farthest voxel sampling is then applied to sample the points evenly. In particular, the voxel size varies with distance to accommodate the uneven distribution of points. 2) RoI-graph Pooling. We build local graphs on the sampled points to better model contextual information and mine point relations through iterative message passing. 3) Visual Features Augmentation. We introduce a simple yet effective fusion strategy to compensate for sparse LiDAR points with limited semantic cues. Based on these modules, we construct our Graph R-CNN as the second stage, which can be applied to existing one-stage detectors to consistently improve detection performance. Extensive experiments show that Graph R-CNN outperforms state-of-the-art 3D detection models by a large margin on both the KITTI and Waymo Open Dataset, and we rank first on the KITTI BEV car detection leaderboard. Code will be available at \url{https://github.com/Nightmare-n/GraphRCNN}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 311,847 |
1608.04171 | Power Data Classification: A Hybrid of a Novel Local Time Warping and LSTM | In this paper, for the purpose of data centre energy consumption monitoring and analysis, we propose to detect the running programs in a server by classifying the observed power consumption series. The time series classification problem has been extensively studied, with various distance measurements developed; recently, deep learning based sequence models have also proved promising. In this paper, we propose a novel distance measurement and build a time series classification algorithm hybridizing the nearest neighbour and long short term memory (LSTM) neural network. More specifically, we first propose a new distance measurement termed Local Time Warping (LTW), which utilizes a user-specified set for local warping, and is designed to be non-commutative and non-dynamic programming. Second, we hybridize 1NN-LTW and LSTM together. In particular, we combine the prediction probability vectors of 1NN-LTW and LSTM to determine the label of the test cases. Finally, using power consumption data from a real data center, we show that the proposed LTW can improve the classification accuracy of DTW from about 84% to 90%. Our experimental results show that the proposed LTW is competitive on our data set compared with existing DTW variants, and that its non-commutative feature is indeed beneficial. We also test a linear version of LTW, which significantly outperforms existing linear runtime lower bound methods such as LB_Keogh. Furthermore, with the hybrid algorithm, we achieve an accuracy of up to about 93% on the power series classification task. Our research can inspire more studies on time series distance measurement and on hybrids of deep learning models with traditional models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 59,789 |
2201.12417 | Why Should I Trust You, Bellman? The Bellman Error is a Poor Replacement for Value Error | In this work, we study the use of the Bellman equation as a surrogate objective for value prediction accuracy. While the Bellman equation is uniquely solved by the true value function over all state-action pairs, we find that the Bellman error (the difference between both sides of the equation) is a poor proxy for the accuracy of the value function. In particular, we show that (1) due to cancellations from both sides of the Bellman equation, the magnitude of the Bellman error is only weakly related to the distance to the true value function, even when considering all state-action pairs, and (2) in the finite data regime, the Bellman equation can be satisfied exactly by infinitely many suboptimal solutions. This means that the Bellman error can be minimized without improving the accuracy of the value function. We demonstrate these phenomena through a series of propositions, illustrative toy examples, and empirical analysis in standard benchmark domains. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 277,633 |
2006.15406 | Listen carefully and tell: an audio captioning system based on residual learning and gammatone audio representation | Automated audio captioning is a machine listening task whose goal is to describe an audio signal using free text. An automated audio captioning system accepts an audio signal as input and outputs a textual description, that is, the caption of the signal. This task can be useful in many applications such as automatic content description or machine-to-machine interaction. In this work, an automated audio captioning system based on residual learning in the encoder phase is proposed. The encoder phase is implemented via different Residual Network configurations, while the decoder phase (which creates the caption) uses recurrent layers plus an attention mechanism. The audio representation chosen is the Gammatone. Results show that the framework proposed in this work surpasses the baseline system in the challenge results. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 184,492 |
2005.00356 | Understanding the Perceived Quality of Video Predictions | The study of video prediction models is believed to be a fundamental approach to representation learning for videos. While a plethora of generative models for predicting the future frame pixel values given the past few frames exist, the quantitative evaluation of the predicted frames has been found to be extremely challenging. In this context, we study the problem of quality assessment of predicted videos. We create the Indian Institute of Science Predicted Videos Quality Assessment (IISc PVQA) Database consisting of 300 videos, obtained by applying different prediction models on different datasets, and accompanying human opinion scores. We collected subjective ratings of quality from 50 human participants for these videos. Our subjective study reveals that human observers were highly consistent in their judgments of quality of predicted videos. We benchmark several popularly used measures for evaluating video prediction and show that they do not adequately correlate with these subjective scores. We introduce two new features to effectively capture the quality of predicted videos, motion-compensated cosine similarities of deep features of predicted frames with past frames, and deep features extracted from rescaled frame differences. We show that our feature design leads to state of the art quality prediction in accordance with human judgments on our IISc PVQA Database. The database and code are publicly available on our project website: https://nagabhushansn95.github.io/publications/2020/pvqa | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 175,210 |
2304.11473 | (Vector) Space is Not the Final Frontier: Product Search as Program Synthesis | As ecommerce continues growing, huge investments in ML and NLP for Information Retrieval are following. While the vector space model has dominated retrieval modelling in product search (even as vectorization itself changed greatly with the advent of deep learning), our position paper argues in a contrarian fashion that program synthesis provides significant advantages for many queries and for a significant number of players in the market. We detail the industry significance of the proposed approach, sketch implementation details, and address common objections, drawing from our experience building a similar system at Tooso. | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | 359,830 |
1706.06282 | Universal Components of Real-world Diffusion Dynamics based on Point Processes | Bursts in human and natural activities are highly clustered in time, suggesting that these activities are influenced by previous events within the social or natural system. Bursty behavior in the real world conveys information of underlying diffusion processes, which have been the focus of diverse scientific communities from online social media to criminology and epidemiology. However, universal components of real-world diffusion dynamics that cut across disciplines remain unexplored. Here, we introduce a wide range of diffusion processes across disciplines and propose universal components of diffusion frameworks. We apply these components to diffusion-based studies of human disease spread, through a case study of the vector-borne disease dengue. The proposed universality of diffusion can motivate transdisciplinary research and provide a fundamental framework for diffusion models. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 75,656 |
1805.08026 | A Correlation Measure Based on Vector-Valued $L_p$-Norms | In this paper, we introduce a new measure of correlation for bipartite quantum states. This measure depends on a parameter $\alpha$, and is defined in terms of vector-valued $L_p$-norms. The measure is within a constant of the exponential of $\alpha$-R\'enyi mutual information, and reduces to the trace norm (total variation distance) for $\alpha=1$. We will prove some decoupling type theorems in terms of this measure of correlation, and present some applications in privacy amplification as well as in bounding the random coding exponents. In particular, we establish a bound on the secrecy exponent of the wiretap channel (under the total variation metric) in terms of the $\alpha$-R\'enyi mutual information according to \emph{Csisz\'ar's proposal}. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 98,026 |
1611.08484 | Vertex-centred Method to Detect Communities in Evolving Networks | Finding communities in evolving networks is a difficult task and raises issues different from the classic static detection case. We introduce an approach based on the recent vertex-centred paradigm. The proposed algorithm, named DynLOCNeSs, detects communities by scanning and evaluating each vertex neighbourhood, which can be done independently in a parallel way. This is done by means of a preference measure, with these preferences used to handle community changes. We also introduce a new vertex neighbourhood preference measure, CWCN, more efficient than existing ones in the considered context. Experimental results show the relevance of this measure and the ability of the proposed approach to detect classical community evolution patterns such as grow-shrink and merge-split. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 64,512 |
2312.13533 | Automated Clinical Coding for Outpatient Departments | Computerised clinical coding approaches aim to automate the process of assigning a set of codes to medical records. While there is active research pushing the state of the art on clinical coding for hospitalized patients, the outpatient setting -- where doctors tend to non-hospitalised patients -- is overlooked. Although both settings can be formalised as a multi-label classification task, they present unique and distinct challenges, which raises the question of whether the success of inpatient clinical coding approaches translates to the outpatient setting. This paper is the first to investigate how well state-of-the-art deep learning-based clinical coding approaches work in the outpatient setting at hospital scale. To this end, we collect a large outpatient dataset comprising over 7 million notes documenting over half a million patients. We adapt four state-of-the-art clinical coding approaches to this setting and evaluate their potential to assist coders. We find evidence that clinical coding in outpatient settings can benefit from more innovations in popular inpatient coding benchmarks. A deeper analysis of the factors contributing to the success -- amount and form of data and choice of document representation -- reveals the presence of easy-to-solve examples, the coding of which can be completely automated with a low error rate. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 417,341 |
1910.12004 | Model-agnostic Approaches to Handling Noisy Labels When Training Sound Event Classifiers | Label noise is emerging as a pressing issue in sound event classification. This arises as we move towards larger datasets that are difficult to annotate manually, but it is even more severe if datasets are collected automatically from online repositories, where labels are inferred through automated heuristics applied to the audio content or metadata. While learning from noisy labels has been an active area of research in computer vision, it has received little attention in sound event classification. Most recent computer vision approaches against label noise are relatively complex, requiring sophisticated networks or extra data resources. In this work, we evaluate simple and efficient model-agnostic approaches to handling noisy labels when training sound event classifiers, namely label smoothing regularization, mixup and noise-robust loss functions. The main advantage of these methods is that they can be easily incorporated into existing deep learning pipelines without the need for network modifications or extra resources. We report results from experiments conducted with the FSDnoisy18k dataset. We show that these simple methods can be effective in mitigating the effect of label noise, providing an accuracy boost of up to 2.5\% when incorporated into two different CNNs, while requiring minimal intervention and computational overhead. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 150,933 |
2411.01292 | Causal reasoning in difference graphs | Understanding causal mechanisms across different populations is essential for designing effective public health interventions. Recently, difference graphs have been introduced as a tool to visually represent causal variations between two distinct populations. While there has been progress in inferring these graphs from data through causal discovery methods, there remains a gap in systematically leveraging their potential to enhance causal reasoning. This paper addresses that gap by establishing conditions for identifying causal changes and effects using difference graphs. It specifically focuses on identifying total causal changes and total effects in a nonparametric setting, as well as direct causal changes and direct effects in a linear setting. In doing so, it provides a novel approach to causal reasoning that holds potential for various public health applications. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 505,002 |
2107.00862 | User Role Discovery and Optimization Method based on K-means + Reinforcement learning in Mobile Applications | With the widespread use of mobile phones, users can share their location and activity anytime, anywhere, in the form of check-in data. These data reflect user features that are stable over the long term, and a set of shared user features can be abstracted as a user role. A role is closely related to the user's social background, occupation, and living habits. This study makes four main contributions. Firstly, user feature models from different views are constructed for each user from the analysis of check-in data. Secondly, the K-Means algorithm is used to discover user roles from user features. Thirdly, a reinforcement learning algorithm is proposed to strengthen the clustering effect of user roles and improve the stability of the clustering result. Finally, experiments verify the validity of the method and demonstrate its effectiveness. | true | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 244,300 |
0712.1345 | Sequential operators in computability logic | Computability logic (CL) (see http://www.cis.upenn.edu/~giorgi/cl.html) is a semantical platform and research program for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth which it has more traditionally been. Formulas in CL stand for (interactive) computational problems, understood as games between a machine and its environment; logical operators represent operations on such entities; and "truth" is understood as existence of an effective solution, i.e., of an algorithmic winning strategy. The formalism of CL is open-ended, and may undergo a series of extensions as the study of the subject advances. The main groups of operators on which CL has been focused so far are the parallel, choice, branching, and blind operators. The present paper introduces a new important group of operators, called sequential. The latter come in the form of sequential conjunction and disjunction, sequential quantifiers, and sequential recurrences. As the name may suggest, the algorithmic intuitions associated with this group are those of sequential computations, as opposed to the intuitions of parallel computations associated with the parallel group of operations: playing a sequential combination of games means playing its components in a sequential fashion, one after another. The main technical result of the present paper is a sound and complete axiomatization of the propositional fragment of computability logic whose vocabulary, together with negation, includes all three -- parallel, choice and sequential -- sorts of conjunction and disjunction. An extension of this result to the first-order level is also outlined. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 1,010 |
2207.00431 | Stain Isolation-based Guidance for Improved Stain Translation | Unsupervised and unpaired domain translation using generative adversarial neural networks, and more precisely CycleGAN, is state of the art for the stain translation of histopathology images. It often, however, suffers from the presence of cycle-consistent but non-structure-preserving errors. We propose an alternative approach to the set of methods which, relying on segmentation consistency, enable the preservation of pathology structures. Focusing on immunohistochemistry (IHC) and multiplexed immunofluorescence (mIF), we introduce a simple yet effective guidance scheme as a loss function that leverages the consistency of stain translation with stain isolation. Qualitative and quantitative experiments show the ability of the proposed approach to improve translation between the two domains. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 305,758 |
2302.00906 | New Constructions of Optimal Binary LCD Codes | Linear complementary dual (LCD) codes can provide an optimum linear coding solution for the two-user binary adder channel. LCD codes can also be used to protect against side-channel attacks and non-invasive fault attacks. Let $d_{LCD}(n, k)$ denote the maximum value of $d$ for which a binary $[n,k, d]$ LCD code exists. In \cite{BS21}, Bouyuklieva conjectured that $d_{LCD}(n+1, k)=d_{LCD}(n, k)$ or $d_{LCD}(n, k) + 1$ for any length $n$ and dimension $k \ge 2$. In this paper, we first prove Bouyuklieva's conjecture \cite{BS21} by constructing a binary $[n,k,d-1]$ LCD code from a binary $[n+1,k,d]$ $LCD_{o,e}$ code, when $d \ge 3$ and $k \ge 2$. Then we provide a distance lower bound for binary LCD codes via expanded codes, and using this bound together with methods such as puncturing, shortening, expanding, and extension, we construct some new binary LCD codes. Finally, we improve some previously known values of $d_{LCD}(n, k)$ for lengths $38 \le n \le 40$ and dimensions $9 \le k \le 15$. We also obtain some values of $d_{LCD}(n, k)$ with $41 \le n \le 50$ and $6 \le k \le n-6$. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 343,401 |
2004.11075 | Fast Convex Relaxations using Graph Discretizations | Matching and partitioning problems are fundamental to computer vision applications, with examples in multilabel segmentation, stereo estimation and optical-flow computation. These tasks can be posed as non-convex energy minimization problems and solved to near-global optimality by recent convex lifting approaches. Yet, applying these techniques comes with a significant computational effort, reducing their feasibility in practical applications. We discuss spatial discretization of continuous partitioning problems into a graph structure, generalizing discretization onto a Cartesian grid. This setup allows us to faithfully work on super-pixel graphs constructed by SLIC or Cut-Pursuit, massively decreasing the computational effort for lifted partitioning problems compared to a Cartesian grid, while optimal energy values remain similar: the global matching is still solved to near-global optimality. We discuss this methodology in detail and show examples in multi-label segmentation by minimal partitions and stereo estimation, where we demonstrate that the proposed graph discretization can reduce the runtime as well as the memory consumption of convex relaxations of matching problems by up to a factor of 10. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 173,806 |
2203.05564 | HDL: Hybrid Deep Learning for the Synthesis of Myocardial Velocity Maps in Digital Twins for Cardiac Analysis | Synthetic digital twins based on medical data accelerate the acquisition, labelling and decision making procedure in digital healthcare. A core part of digital healthcare twins is model-based data synthesis, which permits the generation of realistic medical signals without having to cope with the modelling complexity of the anatomical and biochemical phenomena producing them in reality. Unfortunately, algorithms for cardiac data synthesis have been so far scarcely studied in the literature. An important imaging modality in the cardiac examination is three-directional CINE multi-slice myocardial velocity mapping (3Dir MVM), which provides a quantitative assessment of cardiac motion in three orthogonal directions of the left ventricle. The long acquisition time and complex acquisition procedure make it more urgent to produce synthetic digital twins of this imaging modality. In this study, we propose a hybrid deep learning (HDL) network, especially for synthetic 3Dir MVM data. Our algorithm features a hybrid UNet and a Generative Adversarial Network with a foreground-background generation scheme. The experimental results show that from temporally down-sampled magnitude CINE images (by a factor of six), our proposed algorithm can still successfully synthesise high temporal resolution 3Dir MVM CMR data (PSNR=42.32) with precise left ventricle segmentation (DICE=0.92). These performance scores indicate that our proposed HDL algorithm can be implemented in real-world digital twins for myocardial velocity mapping data simulation. To the best of our knowledge, this work is the first in the literature to investigate digital twins of the 3Dir MVM CMR, which has shown great potential for improving the efficiency of clinical studies via synthesised cardiac data. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 284,839 |
2412.20946 | Generalizing in Net-Zero Microgrids: A Study with Federated PPO and TRPO | This work addresses the challenge of optimal energy management in microgrids through a collaborative and privacy-preserving framework. We propose the FedTRPO methodology, which integrates Federated Learning (FL) and Trust Region Policy Optimization (TRPO) to manage distributed energy resources (DERs) efficiently. Using a customized version of the CityLearn environment and synthetically generated data, we simulate designed net-zero energy scenarios for microgrids composed of multiple buildings. Our approach emphasizes reducing energy costs and carbon emissions while ensuring privacy. Experimental results demonstrate that FedTRPO is comparable with state-of-the-art federated RL methodologies without hyperparameter tuning. The proposed framework highlights the feasibility of collaborative learning for achieving optimal control policies in energy systems, advancing the goals of sustainable and efficient smart grids. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 521,410 |
1412.2773 | Cooperative Change Detection for Online Power Quality Monitoring | This paper considers the real-time power quality monitoring in power grid systems. The goal is to detect the occurrence of disturbances in the nominal sinusoidal voltage/current signal as quickly as possible such that protection measures can be taken in time. Based on an autoregressive (AR) model for the disturbance, we propose a generalized local likelihood ratio (GLLR) detector which processes meter readings sequentially and alarms as soon as the test statistic exceeds a prescribed threshold. The proposed detector not only reacts to a wide range of disturbances, but also achieves lower detection delay compared to the conventional block processing method. Then we further propose to deploy multiple meters to monitor the power signal cooperatively. The distributed meters communicate wirelessly to a central meter, where the data fusion and detection are performed. In light of the limited bandwidth of wireless channels, we develop a level-triggered sampling scheme, where each meter transmits only one-bit each time asynchronously. The proposed multi-meter scheme features substantially low communication overhead, while its performance is close to that of the ideal case where distributed meter readings are perfectly available at the central meter. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 38,233 |
1708.07903 | Nationality Classification Using Name Embeddings | Nationality identification unlocks important demographic information, with many applications in biomedical and sociological research. Existing name-based nationality classifiers use name substrings as features and are trained on small, unrepresentative sets of labeled names, typically extracted from Wikipedia. As a result, these methods achieve limited performance and cannot support fine-grained classification. We exploit the phenomenon of homophily in communication patterns to learn name embeddings, a new representation that encodes gender, ethnicity, and nationality which is readily applicable to building classifiers and other systems. Through our analysis of 57M contact lists from a major Internet company, we are able to design a fine-grained nationality classifier covering 39 groups representing over 90% of the world population. In an evaluation against other published systems over 13 common classes, our F1 score (0.795) is substantially better than that of our closest competitor, Ethnea (0.580). To the best of our knowledge, this is the most accurate, fine-grained nationality classifier available. As a social media application, we apply our classifiers to the followers of major Twitter celebrities over six different domains. We demonstrate stark differences in the ethnicities of the followers of Trump and Obama, and in the sports and entertainments favored by different groups. Finally, we identify an anomalous political figure whose presumably inflated following appears largely incapable of reading the language he posts in. | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 79,552 |
2104.09804 | SE-SSD: Self-Ensembling Single-Stage Object Detector From Point Cloud | We present Self-Ensembling Single-Stage object Detector (SE-SSD) for accurate and efficient 3D object detection in outdoor point clouds. Our key focus is on exploiting both soft and hard targets with our formulated constraints to jointly optimize the model, without introducing extra computation in the inference. Specifically, SE-SSD contains a pair of teacher and student SSDs, in which we design an effective IoU-based matching strategy to filter soft targets from the teacher and formulate a consistency loss to align student predictions with them. Also, to maximize the distilled knowledge for ensembling the teacher, we design a new augmentation scheme to produce shape-aware augmented samples to train the student, aiming to encourage it to infer complete object shapes. Lastly, to better exploit hard targets, we design an ODIoU loss to supervise the student with constraints on the predicted box centers and orientations. Our SE-SSD attains top performance compared with all prior published works. Also, it attains top precisions for car detection in the KITTI benchmark (ranked 1st and 2nd on the BEV and 3D leaderboards, respectively) with an ultra-high inference speed. The code is available at https://github.com/Vegeta2020/SE-SSD. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 231,362 |
1210.5502 | OpenCFU, a New Free and Open-Source Software to Count Cell Colonies and Other Circular Objects | Counting circular objects such as cell colonies is an important source of information for biologists. Although this task is often time-consuming and subjective, it is still predominantly performed manually. The aim of the present work is to provide a new tool to enumerate circular objects from digital pictures and video streams. Here, I demonstrate that the created program, OpenCFU, is very robust, accurate and fast. In addition, it provides control over the processing parameters and is implemented in an intuitive and modern interface. OpenCFU is a cross-platform and open-source software freely available at http://opencfu.sourceforge.net. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 19,283 |
1502.07310 | Pantheon 1.0, a manually verified dataset of globally famous biographies | We present the Pantheon 1.0 dataset: a manually verified dataset of individuals that have transcended linguistic, temporal, and geographic boundaries. The Pantheon 1.0 dataset includes the 11,341 biographies present in more than 25 languages in Wikipedia and is enriched with: (i) manually verified demographic information (place and date of birth, gender); (ii) a taxonomy of occupations classifying each biography at three levels of aggregation; and (iii) two measures of global popularity including the number of languages in which a biography is present in Wikipedia (L), and the Historical Popularity Index (HPI), a metric that combines information on L, time since birth, and page-views (2008-2013). We compare the Pantheon 1.0 dataset to data from the 2003 book, Human Accomplishments, and also to external measures of accomplishment in individual games and sports: Tennis, Swimming, Car Racing, and Chess. In all of these cases we find that measures of popularity (L and HPI) correlate highly with individual accomplishment, suggesting that measures of global popularity proxy the historical impact of individuals. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 40,565 |
2102.01161 | Adjoint Rigid Transform Network: Task-conditioned Alignment of 3D Shapes | Most learning methods for 3D data (point clouds, meshes) suffer significant performance drops when the data is not carefully aligned to a canonical orientation. Aligning real world 3D data collected from different sources is non-trivial and requires manual intervention. In this paper, we propose the Adjoint Rigid Transform (ART) Network, a neural module which can be integrated with a variety of 3D networks to significantly boost their performance. ART learns to rotate input shapes to a learned canonical orientation, which is crucial for a lot of tasks such as shape reconstruction, interpolation, non-rigid registration, and latent disentanglement. ART achieves this with self-supervision and a rotation equivariance constraint on predicted rotations. The remarkable result is that with only self-supervision, ART facilitates learning a unique canonical orientation for both rigid and nonrigid shapes, which leads to a notable boost in performance of aforementioned tasks. We will release our code and pre-trained models for further research. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 218,008 |
2409.15721 | Applying Incremental Learning in Binary-Addition-Tree Algorithm for Dynamic Binary-State Network Reliability | This paper presents a novel approach to enhance the Binary-Addition-Tree (BAT) algorithm by integrating incremental learning techniques. BAT, known for its simplicity in development, implementation, and application, is a powerful implicit enumeration method for solving network reliability and optimization problems. However, it traditionally struggles with dynamic and large-scale networks due to its static nature. By introducing incremental learning, we enable BAT to adapt and improve its performance iteratively as it encounters new data or network changes. This integration allows for more efficient computation, reduced redundancy without searching minimal paths and cuts, and improved overall performance in dynamic environments. Experimental results demonstrate the effectiveness of the proposed method, showing significant improvements in both computational efficiency and solution quality compared to the traditional BAT and indirect algorithms, such as MP-based algorithms and MC-based algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 491,029 |
2408.11535 | SAM-REF: Rethinking Image-Prompt Synergy for Refinement in Segment Anything | The advent of the Segment Anything Model (SAM) marks a significant milestone for interactive segmentation using generalist models. As a late fusion model, SAM extracts image embeddings once and merges them with prompts in later interactions. This strategy limits the model's ability to extract detailed information from the prompted target zone. Current specialist models utilize the early fusion strategy that encodes the combination of images and prompts to target the prompted objects, yet repetitive complex computations on the images result in high latency. The key to these issues is efficiently synergizing the images and prompts. We propose SAM-REF, a two-stage refinement framework that fully integrates images and prompts globally and locally while maintaining the accuracy of early fusion and the efficiency of late fusion. The first-stage GlobalDiff Refiner is a lightweight early fusion network that combines the whole image and prompts, focusing on capturing detailed information for the entire object. The second-stage PatchDiff Refiner locates the object detail window according to the mask and prompts, then refines the local details of the object. Experimentally, we demonstrate the high effectiveness and efficiency of our method in tackling complex cases with multiple interactions. Our SAM-REF model outperforms the current state-of-the-art method in most metrics on segmentation quality without compromising efficiency. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 482,333 |
2011.08071 | JNLP Team: Deep Learning for Legal Processing in COLIEE 2020 | We propose deep learning based methods for automatic systems of legal retrieval and legal question-answering in COLIEE 2020. These systems are all characterized by being pre-trained on large amounts of data before being fine-tuned for the specified tasks. This approach helps to overcome data scarcity and achieve good performance, and can thus be useful for tackling related problems in information retrieval and decision support in the legal domain. Moreover, the approach can be explored to deal with other domain-specific problems. | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | 206,769 |
1406.6322 | Stylized facts in Brazilian vote distributions | Elections, especially in countries such as Brazil with an electorate of the order of 100 million people, yield large-scale data-sets embodying valuable information on the dynamics through which individuals influence each other and make choices. In this work we perform an extensive analysis of data sets available for Brazilian proportional elections of legislators and city councillors throughout the period 1970-2012, which embraces two distinct political regimes: a military dictatorship and a democratic phase. Through the distribution $P(v)$ of the number of candidates receiving $v$ votes, we perform a comparative analysis of different elections in the same calendar and as a function of time. The distributions $P(v)$ present a scale-free regime with a power-law exponent $\alpha$ which is not universal and appears to be characteristic of the electorate. Moreover, we observe that $\alpha$ typically increases with time. We propose a multi-species model consisting of a system of nonlinear differential equations with stochastic parameters that allows us to understand the empirical observations. We conclude that the power-law exponent $\alpha$ constitutes a measure of the degree of feedback of the electorate interactions. To know the interactivity of the population is relevant beyond the context of elections, since a similar feedback may occur in other social contagion processes. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 34,111 |
2410.14833 | A novel approach towards the classification of Bone Fracture from Musculoskeletal Radiography images using Attention Based Transfer Learning | Computer-aided diagnosis (CAD) is today considered a vital tool in the field of biological image categorization, segmentation, and other related tasks. The current breakthrough in computer vision algorithms and deep learning approaches has substantially enhanced the effectiveness and precision of applications built to recognize and locate regions of interest inside medical images. Among the different disciplines of medical image analysis, bone fracture detection and classification have exhibited exceptional potential. Although numerous imaging modalities are applied in medical diagnostics, X-rays are particularly significant in this sector due to their broad availability, ease of use, and extensive information extraction capabilities. This research studies bone fracture categorization using the FracAtlas dataset, which comprises 4,083 musculoskeletal radiography images. Given the transformational development in transfer learning, particularly its efficacy in medical image processing, we deploy an attention-based transfer learning model to detect bone fractures in X-ray scans. Though the popular InceptionV3 and DenseNet121 deep learning models have been widely used, they still have the potential to be employed in crucial tasks. In this research, alongside transfer learning, a separate attention mechanism is also applied to boost the capabilities of transfer learning techniques. Through rigorous optimization, our model achieves a state-of-the-art accuracy of more than 90\% in fracture classification. This work contributes to the expanding corpus of research focused on the application of transfer learning to medical imaging, notably in the context of X-ray processing, and emphasizes the promise for additional exploration in this domain. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 500,229 |
1306.0158 | Virality Prediction and Community Structure in Social Networks | How does network structure affect diffusion? Recent studies suggest that the answer depends on the type of contagion. Complex contagions, unlike infectious diseases (simple contagions), are affected by social reinforcement and homophily. Hence, the spread within highly clustered communities is enhanced, while diffusion across communities is hampered. A common hypothesis is that memes and behaviors are complex contagions. We show that, while most memes indeed behave like complex contagions, a few viral memes spread across many communities, like diseases. We demonstrate that the future popularity of a meme can be predicted by quantifying its early spreading pattern in terms of community concentration. The more communities a meme permeates, the more viral it is. We present a practical method to translate data about community structure into predictive knowledge about what information will spread widely. This connection may lead to significant advances in computational social science, social media analytics, and marketing applications. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 24,935 |
2310.11398 | Neural Attention: Enhancing QKV Calculation in Self-Attention Mechanism with Neural Networks | In the realm of deep learning, the self-attention mechanism has substantiated its pivotal role across a myriad of tasks, encompassing natural language processing and computer vision. Despite achieving success across diverse applications, the traditional self-attention mechanism primarily leverages linear transformations for the computation of query, key, and value (QKV), which may not invariably be the optimal choice under specific circumstances. This paper probes into a novel methodology for QKV computation: implementing a specially designed neural network structure for the calculation. Utilizing a modified Marian model, we conducted experiments on the IWSLT 2017 German-English translation task dataset and juxtaposed our method with the conventional approach. The experimental results unveil a significant enhancement in BLEU scores with our method. Furthermore, our approach also manifested superiority when training the Roberta model with the Wikitext-103 dataset, reflecting a notable reduction in model perplexity compared to its original counterpart. These experimental outcomes not only validate the efficacy of our method but also reveal the immense potential in optimizing the self-attention mechanism through neural network-based QKV computation, paving the way for future research and practical applications. The source code and implementation details for our proposed method can be accessed at https://github.com/ocislyjrti/NeuralAttention. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 400,623 |
2103.00092 | The Age of Correlated Features in Supervised Learning based Forecasting | In this paper, we analyze the impact of information freshness on supervised learning based forecasting. In these applications, a neural network is trained to predict a time-varying target (e.g., solar power), based on multiple correlated features (e.g., temperature, humidity, and cloud coverage). The features are collected from different data sources and are subject to heterogeneous and time-varying ages. By using an information-theoretic approach, we prove that the minimum training loss is a function of the ages of the features, where the function is not always monotonic. However, if the empirical distribution of the training data is close to the distribution of a Markov chain, then the training loss is approximately a non-decreasing age function. Both the training loss and testing loss depict similar growth patterns as the age increases. An experiment on solar power prediction is conducted to validate our theory. Our theoretical and experimental results suggest that it is beneficial to (i) combine the training data with different age values into a large training dataset and jointly train the forecasting decisions for these age values, and (ii) feed the age value as a part of the input feature to the neural network. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 222,143 |
2310.01156 | Neural Fiber Activation in Unipolar vs Bipolar Deep Brain Stimulation | Deep Brain Stimulation (DBS) is an established and powerful treatment method in various neurological disorders. It involves chronically delivering electrical pulses to a certain stimulation target in the brain in order to alleviate the symptoms of a disease. Traditionally, the effect of DBS on neural tissue has been modeled based on the geometrical intersection of the static Volume of Tissue Activated (VTA) and the stimulation target. Recent studies suggest that the Dentato-Rubro-Thalamic Tract (DRTT) may serve as a potential common underlying stimulation target for tremor control in Essential Tremor (ET). However, clinical observations highlight that the therapeutic effect of DBS, especially in ET, is strongly influenced by the dynamic DBS parameters such as pulse width and frequency, as well as stimulation polarity. This study introduces a computational model to elucidate the effect of the stimulation signal shape on the DRTT under neural input. The simulation results suggest that achieving a specific pulse amplitude threshold is necessary before eliciting the therapeutic effect through adjustments in pulse widths and frequencies becomes feasible. Longer pulse widths proved more likely to induce firing, thus requiring a lower stimulation amplitude. Additionally, the modulation effect of bipolar configurations on neural traffic was found to vary significantly depending on the chosen stimulation polarity and the direction of neural traffic. Further, bipolar configurations demonstrated the ability to selectively influence firing patterns in different fiber tracts. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 396,302 |
2208.05621 | ARMANI: Part-level Garment-Text Alignment for Unified Cross-Modal Fashion Design | Cross-modal fashion image synthesis has emerged as one of the most promising directions in the generation domain due to the vast untapped potential of incorporating multiple modalities and the wide range of fashion image applications. To facilitate accurate generation, cross-modal synthesis methods typically rely on Contrastive Language-Image Pre-training (CLIP) to align textual and garment information. In this work, we argue that simply aligning texture and garment information is not sufficient to capture the semantics of the visual information and therefore propose MaskCLIP. MaskCLIP decomposes the garments into semantic parts, ensuring fine-grained and semantically accurate alignment between the visual and text information. Building on MaskCLIP, we propose ARMANI, a unified cross-modal fashion designer with part-level garment-text alignment. ARMANI discretizes an image into uniform tokens based on a learned cross-modal codebook in its first stage and uses a Transformer to model the distribution of image tokens for a real image given the tokens of the control signals in its second stage. Contrary to prior approaches that also rely on two-stage paradigms, ARMANI introduces textual tokens into the codebook, making it possible for the model to utilize fine-grained semantic information to generate more realistic images. Further, by introducing a cross-modal Transformer, ARMANI is versatile and can accomplish image synthesis from various control signals, such as pure text, sketch images, and partial images. Extensive experiments conducted on our newly collected cross-modal fashion dataset demonstrate that ARMANI generates photo-realistic images in diverse synthesis tasks and outperforms existing state-of-the-art cross-modal image synthesis approaches. Our code is available at https://github.com/Harvey594/ARMANI. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 312,444 |
2401.12970 | Raidar: geneRative AI Detection viA Rewriting | We find that large language models (LLMs) are more likely to modify human-written text than AI-generated text when tasked with rewriting. This tendency arises because LLMs often perceive AI-generated text as high-quality, leading to fewer modifications. We introduce a method to detect AI-generated content by prompting LLMs to rewrite text and calculating the editing distance of the output. We dubbed our geneRative AI Detection viA Rewriting method Raidar. Raidar significantly improves the F1 detection scores of existing AI content detection models -- both academic and commercial -- across various domains, including News, creative writing, student essays, code, Yelp reviews, and arXiv papers, with gains of up to 29 points. Operating solely on word symbols without high-dimensional features, our method is compatible with black box LLMs, and is inherently robust on new content. Our results illustrate the unique imprint of machine-generated text through the lens of the machines themselves. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 423,556 |
1608.01212 | A Novel Approach for Data-Driven Automatic Site Recommendation and Selection | This paper presents a novel, generic, and automatic method for data-driven site selection. Site selection is one of the most crucial and important decisions made by any company. Such a decision depends on various factors of sites, including socio-economic, geographical, ecological, as well as specific requirements of companies. The existing approaches for site selection (commonly used by economists) are manual, subjective, and not scalable, especially to Big Data. The presented method for site selection is robust, efficient, scalable, and is capable of handling challenges emerging in Big Data. To assess the effectiveness of the presented method, it is evaluated on real data (collected from the Federal Statistical Office of Germany) of around 200 influencing factors which are considered by economists for site selection of supermarkets in Germany (Lidl, EDEKA, and NP). Evaluation results show that there is a large overlap (86.4 \%) between the sites of existing supermarkets and the sites recommended by the presented method. In addition, the method also recommends many sites (328) where a new supermarket store could be opened. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 59,395 |
2106.09958 | Novelty Detection via Contrastive Learning with Negative Data Augmentation | Novelty detection is the process of determining whether a query example differs from the learned training distribution. Previous methods attempt to learn the representation of the normal samples via generative adversarial networks (GANs). However, they suffer from unstable training, mode dropping, and low discriminative ability. Recently, various pretext tasks (e.g. rotation prediction and clustering) have been proposed for self-supervised learning in novelty detection. However, the learned latent features are still insufficiently discriminative. We overcome such problems by introducing a novel decoder-encoder framework. Firstly, a generative network (a.k.a. decoder) learns the representation by mapping the initialized latent vector to an image. In particular, this vector is initialized by considering the entire distribution of training data to avoid the problem of mode dropping. Secondly, a contrastive network (a.k.a. encoder) aims to ``learn to compare'' through mutual information estimation, which directly helps the generative network to obtain a more discriminative representation by using a negative data augmentation strategy. Extensive experiments show that our model has significant superiority over cutting-edge novelty detectors and achieves new state-of-the-art results on some novelty detection benchmarks, e.g. CIFAR10 and DCASE. Moreover, our model is more stable for training in a non-adversarial manner, compared to other adversarial based novelty detection methods. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 241,851 |
2112.10273 | Design of a synthetic integral feedback circuit: dynamic analysis and DNA implementation | The design and implementation of regulation motifs ensuring robust perfect adaptation are challenging problems in synthetic biology. Indeed, the design of high-yield robust metabolic pathways producing, for instance, drug precursors and biofuels, could be easily imagined to rely on such a control strategy in order to optimize production levels and reduce production costs, despite the presence of environmental disturbance and model uncertainty. We propose here a motif that ensures tracking and robust perfect adaptation for the controlled reaction network through integral feedback. Its metabolic load on the host is fully tunable and can be made arbitrarily close to the constitutive limit, the universal minimal metabolic load of all possible controllers. A DNA implementation of the controller network is finally provided. Computer simulations using realistic parameters demonstrate the good agreement between the DNA implementation and the ideal controller dynamics. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 272,390 |
2009.08816 | Improved Coding over Sets for DNA-Based Data Storage | Error-correcting codes over sets, with applications to DNA storage, are studied. The DNA-storage channel receives a set of sequences, and produces a corrupted version of the set, including sequence loss, symbol substitution, symbol insertion/deletion, and limited-magnitude errors in symbols. Various parameter regimes are studied. New bounds on code parameters are provided, which improve upon known bounds. New codes are constructed, at times matching the bounds up to lower-order terms or small constant factors. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 196,354
2102.05954 | Demarcating Endogenous and Exogenous Opinion Dynamics: An Experimental
Design Approach | The networked opinion diffusion in online social networks (OSN) is often governed by two genres of opinions - endogenous opinions that are driven by the influence of social contacts among users, and exogenous opinions which are formed by external effects like news, feeds, etc. Accurate demarcation of endogenous and exogenous messages offers an important cue to opinion modeling, thereby enhancing its predictive performance. In this paper, we design a suite of unsupervised classification methods based on experimental design approaches, in which we aim to select the subsets of events which minimize different measures of mean estimation error. In more detail, we first show that these subset selection tasks are NP-Hard. Then we show that the associated objective functions are weakly submodular, which allows us to design efficient approximation algorithms with guarantees. Finally, we validate the efficacy of our proposal on various real-world datasets crawled from Twitter as well as diverse synthetic datasets. Our experiments range from validating prediction performance on unsanitized and sanitized events to checking the effect of selecting optimal subsets of various sizes. Through various experiments, we have found that our method offers a significant improvement in opinion-forecasting accuracy over several competitors. | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 219,589
1812.00099 | Understanding Unequal Gender Classification Accuracy from Face Images | Recent work shows unequal performance of commercial face classification services in the gender classification task across intersectional groups defined by skin type and gender. Accuracy on dark-skinned females is significantly worse than on any other group. In this paper, we conduct several analyses to try to uncover the reason for this gap. The main finding, perhaps surprisingly, is that skin type is not the driver. This conclusion is reached via stability experiments that vary an image's skin type via color-theoretic methods, namely luminance mode-shift and optimal transport. A second suspect, hair length, is also shown not to be the driver via experiments on face images cropped to exclude the hair. Finally, using contrastive post-hoc explanation techniques for neural networks, we bring forth evidence suggesting that differences in lip, eye and cheek structure across ethnicity lead to the differences. Further, lip and eye makeup are seen as strong predictors for a female face, which is a troubling propagation of a gender stereotype. | false | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | 115,158 |
2404.18612 | Enhancing Prosthetic Safety and Environmental Adaptability: A
Visual-Inertial Prosthesis Motion Estimation Approach on Uneven Terrains | Environment awareness is crucial for enhancing the walking safety and stability of amputees wearing powered prostheses when crossing uneven terrains such as stairs and obstacles. However, existing environmental perception systems for prostheses only provide terrain types and corresponding parameters, which fails to prevent potential collisions when crossing uneven terrains and may lead to falls and other severe consequences. In this paper, a visual-inertial motion estimation approach is proposed for prostheses to perceive their movement and the changes in the spatial relationship between the prosthesis and uneven terrain while traversing it. To achieve this, we estimate the knee motion by utilizing a depth camera to perceive the environment and align feature points extracted from stairs and obstacles. Subsequently, an error-state Kalman filter is incorporated to fuse the inertial data into the visual estimates, reducing the feature extraction error and yielding a more robust estimate. The motions of the prosthetic joint and toe are derived using the prosthesis model parameters. Experiments conducted on our collected dataset and on stair-walking trials with a powered prosthesis show that the proposed method can accurately track the motion of the human leg and prosthesis, with an average root-mean-square error of the toe trajectory of less than 5 cm. The proposed method is expected to enable environment-adaptive control for prostheses, thereby enhancing amputees' safety and mobility on uneven terrains. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 450,330
1804.02555 | Drive Video Analysis for the Detection of Traffic Near-Miss Incidents | Because of their recent introduction, self-driving cars and advanced driver assistance system (ADAS) equipped vehicles have had little opportunity to learn the dangerous traffic scenarios (including near-miss incidents) that provide normal drivers with strong motivation to drive safely. Accordingly, as a means of providing learning depth, this paper presents a novel traffic database that contains information on a large number of traffic near-miss incidents that were obtained by mounting driving recorders in more than 100 taxis over the course of a decade. The study makes the following two main contributions: (i) In order to assist automated systems in detecting near-miss incidents based on database instances, we created a large-scale traffic near-miss incident database (NIDB) that consists of video clips of dangerous events captured by monocular driving recorders. (ii) To illustrate the applicability of NIDB traffic near-miss incidents, we provide two primary database-related improvements: parameter fine-tuning using various near-miss scenes from NIDB, and foreground/background separation into motion representation. Then, using our new database in conjunction with a monocular driving recorder, we developed a near-miss recognition method that provides automated systems with a performance level that is comparable to a human-level understanding of near-miss incidents (64.5% vs. 68.4% at near-miss recognition, 61.3% vs. 78.7% at near-miss detection). | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 94,430
2304.02396 | AutoRL Hyperparameter Landscapes | Although Reinforcement Learning (RL) has been shown to be capable of producing impressive results, its use is limited by the impact of its hyperparameters on performance. This often makes it difficult to achieve good results in practice. Automated RL (AutoRL) addresses this difficulty, yet little is known about the dynamics of the hyperparameter landscapes that hyperparameter optimization (HPO) methods traverse in search of optimal configurations. In view of existing AutoRL approaches dynamically adjusting hyperparameter configurations, we propose an approach to build and analyze these hyperparameter landscapes not just for one point in time but at multiple points in time throughout training. Addressing an important open question on the legitimacy of such dynamic AutoRL approaches, we provide thorough empirical evidence that the hyperparameter landscapes strongly vary over time across representative algorithms from the RL literature (DQN, PPO, and SAC) in different kinds of environments (Cartpole, Bipedal Walker, and Hopper). This supports the theory that hyperparameters should be dynamically adjusted during training and shows the potential for more insights on AutoRL problems that can be gained through landscape analyses. Our code can be found at https://github.com/automl/AutoRL-Landscape | false | false | false | false | true | false | true | true | false | false | true | false | false | false | false | false | false | false | 356,423
2312.06088 | SECNN: Squeeze-and-Excitation Convolutional Neural Network for Sentence
Classification | Sentence classification is one of the basic tasks of natural language processing. Convolutional neural networks (CNNs) can extract n-gram features through convolutional filters and capture local correlations between consecutive words in parallel, so the CNN is a popular neural network architecture for this task. However, restricted by the width of its convolutional filters, a CNN has difficulty capturing long-term contextual dependencies. Attention is a mechanism that considers global information and pays more attention to the keywords in a sentence; thus, attention mechanisms are combined with CNNs to improve performance on the sentence classification task. In our work, we do not focus on keywords in a sentence, but on which of the CNN's output feature maps are more important. We propose a Squeeze-and-Excitation Convolutional Neural Network (SECNN) for sentence classification. SECNN takes the feature maps from multiple CNNs as different channels of the sentence representation; we can then utilize a channel attention mechanism, namely the SE attention mechanism, to enable the model to learn the attention weights of the different channel features. The results show that our model achieves advanced performance on the sentence classification task. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 414,364
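A minimal sketch of the channel-attention idea in this abstract: squeeze-and-excitation weights computed over stacked CNN feature maps. The layer sizes and reduction ratio are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ChannelSEAttention(nn.Module):
    """Squeeze-and-excitation block over CNN feature-map channels (sketch)."""
    def __init__(self, num_channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(num_channels, num_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(num_channels // reduction, num_channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, length) -- e.g. feature maps produced by
        # several parallel convolutional filters over a sentence.
        squeeze = x.mean(dim=-1)          # global average pool per channel
        weights = self.fc(squeeze)        # learned per-channel attention
        return x * weights.unsqueeze(-1)  # re-weight each feature map

# Usage: re-weight 64 feature maps of length 50
feats = torch.randn(8, 64, 50)
print(ChannelSEAttention(64)(feats).shape)  # torch.Size([8, 64, 50])
```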
1904.03604 | BriskStream: Scaling Data Stream Processing on Shared-Memory Multicore
Architectures | We introduce BriskStream, an in-memory data stream processing system (DSPS) specifically designed for modern shared-memory multicore architectures. BriskStream's key contribution is an execution plan optimization paradigm, namely RLAS, which takes the relative location (i.e., NUMA distance) of each pair of producer-consumer operators into consideration. We propose a branch-and-bound based approach with three heuristics to resolve the resulting nontrivial optimization problem. The experimental evaluations demonstrate that BriskStream yields much higher throughput and better scalability than existing DSPSs on multicore architectures when processing different types of workloads. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 126,788
1207.2000 | Hycon2 Benchmark: Power Network System | As a benchmark exercise for testing software and methods developed in Hycon2 for decentralized and distributed control, we address the problem of designing the Automatic Generation Control (AGC) layer in power network systems. In particular, we present three different scenarios and discuss performance levels that can be reached using Centralized Model Predictive Control (MPC). These results can be used as a milestone for comparing the performance of alternative control schemes. Matlab software for simulating the scenarios is also provided in an accompanying file. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 17,352 |
1907.09732 | Variational Registration of Multiple Images with the SVD based SqN
Distance Measure | Image registration, especially the quantification of image similarity, is an important task in image processing. Various approaches for the comparison of two images are discussed in the literature. However, although most of these approaches perform very well in a two-image scenario, their extension to a multiple-image scenario deserves attention. In this article, we discuss and compare registration methods for multiple images. Our key assumption is that information about the singular values of a feature matrix of images can be used for alignment. We introduce, discuss, and relate three recent approaches from the literature: the Schatten q-norm based SqN distance measure, a rank-based approach, and a feature-volume based approach. We also present results for typical applications such as dynamic image sequences or stacks of histological sections. Our results indicate that the SqN approach is in fact a suitable distance measure for image registration. Moreover, our examples also indicate that the results obtained by SqN are superior to those obtained by its competitors. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 139,461
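A minimal sketch of the key quantity in this abstract: the Schatten q-norm of a feature matrix assembled from multiple images. Flattening each image into one column is an assumption for illustration; with q = 1 (the nuclear norm), well-aligned identical images yield a low-rank feature matrix and hence a smaller value than misaligned ones.

```python
import numpy as np

def sqn_value(images, q: float = 1.0) -> float:
    # Stack each flattened image as one column of the feature matrix
    # (an illustrative feature construction, not necessarily the paper's).
    feature_matrix = np.stack([np.asarray(im, float).ravel() for im in images], axis=1)
    s = np.linalg.svd(feature_matrix, compute_uv=False)  # singular values
    return float((s ** q).sum() ** (1.0 / q))

rng = np.random.default_rng(0)
base = rng.random((16, 16))
print(sqn_value([base, base]))                  # aligned: near rank-1, smaller
print(sqn_value([base, rng.random((16, 16))]))  # misaligned: larger
```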
1411.5417 | Private Empirical Risk Minimization Beyond the Worst Case: The Effect of
the Constraint Set Geometry | Empirical Risk Minimization (ERM) is a standard technique in machine learning, where a model is selected by minimizing a loss function over a constraint set. When the training dataset consists of private information, it is natural to use a differentially private ERM algorithm, and this problem has been the subject of a long line of work started by Chaudhuri and Monteleoni (2008). A private ERM algorithm outputs an approximate minimizer of the loss function, and its error can be measured as the difference from the optimal value of the loss function. When the constraint set is arbitrary, the required error bounds are fairly well understood \cite{BassilyST14}. In this work, we show that the geometric properties of the constraint set can be used to derive significantly better results. Specifically, we show that a differentially private version of Mirror Descent leads to error bounds of the form $\tilde{O}(G_{\mathcal{C}}/n)$ for a Lipschitz loss function, improving on the $\tilde{O}(\sqrt{p}/n)$ bounds in Bassily, Smith and Thakurta (2014). Here $p$ is the dimensionality of the problem, $n$ is the number of data points in the training set, and $G_{\mathcal{C}}$ denotes the Gaussian width of the constraint set that we optimize over. We show similar improvements for strongly convex functions, and for smooth functions. In addition, we show that when the loss function is Lipschitz with respect to the $\ell_1$ norm and $\mathcal{C}$ is $\ell_1$-bounded, a differentially private version of the Frank-Wolfe algorithm gives error bounds of the form $\tilde{O}(n^{-2/3})$. This captures the important and common case of sparse linear regression (LASSO), when the data $x_i$ satisfies $|x_i|_{\infty} \leq 1$ and we optimize over the $\ell_1$ ball. We show new lower bounds for this setting, that together with known bounds, imply that all our upper bounds are tight. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 37,739
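A minimal sketch, under stated simplifications, of a differentially private Frank-Wolfe iteration over the unit $\ell_1$ ball as invoked in this abstract: each step privately selects a near-minimizing vertex $\pm e_j$ and takes a convex step toward it. The Laplace noise scale and the naive per-step budget split are illustrative, not the paper's calibration or privacy accounting.

```python
import numpy as np

def private_frank_wolfe(X, y, epsilon, steps=50, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = np.zeros(p)
    eps_step = epsilon / steps              # naive budget split (illustrative)
    for t in range(steps):
        grad = X.T @ (X @ w - y) / n        # gradient of 0.5 * mean squared loss
        scores = np.concatenate([grad, -grad])  # <grad, v> for vertices +e_j, -e_j
        noisy = scores + rng.laplace(scale=2.0 / (n * eps_step), size=2 * p)
        j = int(np.argmin(noisy))           # noisy (private) vertex selection
        v = np.zeros(p)
        if j < p:
            v[j] = 1.0
        else:
            v[j - p] = -1.0
        gamma = 2.0 / (t + 2.0)             # standard Frank-Wolfe step size
        w = (1.0 - gamma) * w + gamma * v
    return w

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(500, 10))  # |x_i|_inf <= 1, as in the abstract
y = X @ np.array([0.5, -0.5] + [0.0] * 8)
print(private_frank_wolfe(X, y, epsilon=2.0))
```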
1910.12707 | Outlining where humans live -- The World Settlement Footprint 2015 | Human settlements are the cause and consequence of most environmental and societal changes on Earth; however, their location and extent is still under debate. We provide here a new 10m resolution (0.32 arc sec) global map of human settlements on Earth for the year 2015, namely the World Settlement Footprint 2015 (WSF2015). The raster dataset has been generated by means of an advanced classification system which, for the first time, jointly exploits open-and-free optical and radar satellite imagery. The WSF2015 has been validated against 900,000 samples labelled by crowdsourcing photointerpretation of very high resolution Google Earth imagery and outperforms all other similar existing layers; in particular, it considerably improves the detection of very small settlements in rural regions and better outlines scattered suburban areas. The dataset can be used at any scale of observation in support of all applications requiring detailed and accurate information on human presence (e.g., socioeconomic development, population distribution, risk assessment, etc.). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 151,171
2204.03507 | Reliable Transiently-Powered Communication | Frequent power failures can introduce significant packet losses during communication among energy harvesting batteryless wireless sensors. Nodes should be aware of the energy level of their neighbors to guarantee the success of communication and avoid wasting energy. This paper presents TRAP (TRAnsiently-powered Protocol) that allows nodes to communicate only if the energy availability on both sides of the communication channel is sufficient before packet transmission. TRAP relies on a novel modulator circuit, which operates without microcontroller intervention and transmits the energy status almost for free over the radiofrequency backscatter channel. Our experimental results showed that TRAP avoids failed transmissions introduced by the power failures and ensures reliable intermittent communication among batteryless sensors. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 290,328 |
1702.05743 | DR2-Net: Deep Residual Reconstruction Network for Image Compressive
Sensing | Most traditional algorithms for compressive sensing image reconstruction suffer from the intensive computation. Recently, deep learning-based reconstruction algorithms have been reported, which dramatically reduce the time complexity than iterative reconstruction algorithms. In this paper, we propose a novel \textbf{D}eep \textbf{R}esidual \textbf{R}econstruction Network (DR$^{2}$-Net) to reconstruct the image from its Compressively Sensed (CS) measurement. The DR$^{2}$-Net is proposed based on two observations: 1) linear mapping could reconstruct a high-quality preliminary image, and 2) residual learning could further improve the reconstruction quality. Accordingly, DR$^{2}$-Net consists of two components, \emph{i.e.,} linear mapping network and residual network, respectively. Specifically, the fully-connected layer in neural network implements the linear mapping network. We then expand the linear mapping network to DR$^{2}$-Net by adding several residual learning blocks to enhance the preliminary image. Extensive experiments demonstrate that the DR$^{2}$-Net outperforms traditional iterative methods and recent deep learning-based methods by large margins at measurement rates 0.01, 0.04, 0.1, and 0.25, respectively. The code of DR$^{2}$-Net has been released on: https://github.com/coldrainyht/caffe\_dr2 | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 68,464 |
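A minimal sketch of the two-component design this abstract describes: a fully-connected linear mapping produces a preliminary image, and residual blocks refine it. Block counts, channel widths, and the 33x33 block size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LinearPlusResidualRecon(nn.Module):
    """Linear mapping + residual refinement for CS reconstruction (sketch)."""
    def __init__(self, m: int, side: int = 33, n_res: int = 2):
        super().__init__()
        self.side = side
        self.linear = nn.Linear(m, side * side)  # linear mapping network
        def res_block():
            return nn.Sequential(
                nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 1, 3, padding=1),
            )
        self.res_blocks = nn.ModuleList([res_block() for _ in range(n_res)])

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        x = self.linear(y).view(-1, 1, self.side, self.side)  # preliminary image
        for block in self.res_blocks:
            x = x + block(x)  # residual learning refines the estimate
        return x

# Usage: reconstruct 33x33 blocks from m=109 measurements (rate ~ 0.1)
y = torch.randn(4, 109)
print(LinearPlusResidualRecon(109)(y).shape)  # torch.Size([4, 1, 33, 33])
```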
1303.5400 | Objection-Based Causal Networks | This paper introduces the notion of objection-based causal networks, which resemble probabilistic causal networks except that they are quantified using objections. An objection is a logical sentence and denotes a condition under which a causal dependency does not exist. Objection-based causal networks enjoy almost all the properties that make probabilistic causal networks popular, with the added advantage that objections are arguably more intuitive than probabilities. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 23,088
2308.09656 | Safe Collision and Clamping Reaction for Parallel Robots During
Human-Robot Collaboration | Parallel robots (PRs) offer the potential for safe human-robot collaboration because of their low moving masses. Due to the in-parallel kinematic chains, the risk of contact in the form of collisions and clamping at a chain increases. Ensuring safety is investigated in this work through various contact reactions on a real planar PR. External forces are estimated based on proprioceptive information and a dynamics model, which allows contact detection. Retraction along the direction of the estimated line of action provides an instantaneous response to limit the occurring contact forces within the experiment to 70 N at a maximum velocity of 0.4 m/s. A reduction in the stiffness of a Cartesian impedance control is investigated as a further strategy. For clamping, a feedforward neural network (FNN) is trained and tested in different joint angle configurations to classify whether a collision or clamping occurs, with an accuracy of 80%. A second FNN classifies the clamping kinematic chain to enable a subsequent kinematic projection of the clamping joint angle onto the rotational platform coordinates. In this way, a structure opening is performed in addition to the softer retraction movement. The reaction strategies are compared in real-world experiments at different velocities and controller stiffnesses to demonstrate their effectiveness. The results show that in all collision and clamping experiments the PR terminates the contact in less than 130 ms. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 386,377
1808.01878 | From traffic conflict simulation to traffic crash simulation:
introducing traffic safety indicators based on the explicit simulation of
potential driver errors | This paper introduces a general simulation framework that allows the simulation of crashes and the evaluation of their consequences within existing microsimulation packages. A specific family of simple and reproducible conflict indicators is proposed and applied to many case studies. In this approach, driver failures are simulated by assuming that a driver stops reacting to external stimuli and keeps driving at the current speed for a given time. The trajectory of the distracted driver's vehicle is thus projected, over the established distraction time, onto the actual trajectories of the other vehicles. Every occurring crash is then evaluated in terms of the energy involved, or with any other severity index (which can easily be calculated since the accident dynamics can be accurately simulated). Simulating a driver error captures not only the crash typologies normally accounted for with surrogate safety measures, but also many other typical crash types that are impossible to simulate with microsimulation and traditional methodologies because they are caused by vehicles driving on non-conflicting trajectories, such as drivers speeding through a red light, taking the wrong lane or side of the street, or simply driving off the road in isolated accidents against external obstacles or traffic barriers. The total crash energy of all crashes is proposed as an indicator of risk and adopted in the case studies. Moreover, the concepts introduced in this paper allow scientists to define other relevant variables that can be used as surrogate safety indicators that account for driving errors. Preliminary results on different case studies show that the safety evaluations accord well with statistical data and empirical expectations, as well as with other traditional safety indicators commonly used in microsimulation. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 104,663
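As an illustration of the failure-projection idea in this abstract, here is a minimal sketch: the distracted vehicle keeps its current velocity for the distraction window, and any resulting crash is scored by the kinetic energy of the relative motion (one possible severity index). The masses, collision radius, time step, and the constant-velocity motion of the second vehicle are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def crash_energy(p_a, v_a, p_b, v_b, t_distract, dt=0.1,
                 m_a=1500.0, m_b=1500.0, radius=2.0):
    """Project a distracted driver's constant-velocity trajectory and, on
    collision, return the kinetic energy of the relative motion (joules)."""
    p_a, v_a = np.asarray(p_a, float), np.asarray(v_a, float)
    p_b, v_b = np.asarray(p_b, float), np.asarray(v_b, float)
    for k in range(1, int(t_distract / dt) + 1):
        # Positions after k time steps of the distraction window
        if np.linalg.norm((p_a + v_a * k * dt) - (p_b + v_b * k * dt)) < radius:
            mu = m_a * m_b / (m_a + m_b)  # reduced mass of the two vehicles
            return 0.5 * mu * float(np.linalg.norm(v_a - v_b) ** 2)
    return 0.0  # no crash within the distraction window

# Two vehicles on crossing paths, 3 s distraction: returns the crash energy
print(crash_energy([0, 0], [10, 0], [30, -15], [0, 5], t_distract=3.0))
```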
1911.10109 | Implementation of Optical Deep Neural Networks using the Fabry-Perot
Interferometer | Future developments in deep learning applications requiring large datasets will be constrained by the power and speed limitations of silicon-based Von Neumann computing architectures. Optical architectures provide a low-power and high-speed hardware alternative. Recent publications have suggested promising implementations of optical neural networks (ONNs), showing orders-of-magnitude efficiency and speed gains over current state-of-the-art hardware alternatives. In this work, the transmission of the Fabry-Perot Interferometer (FPI) is proposed as a low-power, low-footprint activation function unit. Numerical simulations of optical CNNs using the FPI-based activation functions show accuracies of 98% on the MNIST dataset. An investigation of a possible physical implementation of the network shows that an ONN based on current tunable FPIs could be slowed by actuation delays, but rapidly developing optical hardware fabrication techniques could make an integrated approach using the proposed FPI setups a powerful solution for previously inaccessible deep learning applications. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 154,731
2208.06303 | Triple-View Feature Learning for Medical Image Segmentation | Deep learning models, e.g. supervised Encoder-Decoder style networks, exhibit promising performance in medical image segmentation, but come with a high labelling cost. We propose TriSegNet, a semi-supervised semantic segmentation framework. It uses triple-view feature learning on a limited amount of labelled data and a large amount of unlabeled data. The triple-view architecture consists of three pixel-level classifiers and a low-level shared-weight learning module. The model is first initialized with labelled data. Label processing, including data perturbation, confidence label voting and unconfident label detection for annotation, enables the model to train on labelled and unlabeled data simultaneously. The confidence of each model gets improved through the other two views of the feature learning. This process is repeated until each model reaches the same confidence level as its counterparts. This strategy enables triple-view learning of generic medical image datasets. Bespoke overlap-based and boundary-based loss functions are tailored to the different stages of the training. The segmentation results are evaluated on four publicly available benchmark datasets including Ultrasound, CT, MRI, and Histology images. Repeated experiments demonstrate the effectiveness of the proposed network compared against other semi-supervised algorithms, across a large set of evaluation measures. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 312,670 |
2404.01830 | Doubly-Robust Off-Policy Evaluation with Estimated Logging Policy | We introduce a novel doubly-robust (DR) off-policy evaluation (OPE) estimator for Markov decision processes, DRUnknown, designed for situations where both the logging policy and the value function are unknown. The proposed estimator initially estimates the logging policy and then estimates the value function model by minimizing the asymptotic variance of the estimator while considering the estimating effect of the logging policy. When the logging policy model is correctly specified, DRUnknown achieves the smallest asymptotic variance within the class containing existing OPE estimators. When the value function model is also correctly specified, DRUnknown is optimal as its asymptotic variance reaches the semiparametric lower bound. We present experimental results conducted in contextual bandits and reinforcement learning to compare the performance of DRUnknown with that of existing methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 443,604 |
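For context, the standard doubly-robust estimator for contextual bandits (a setting the abstract evaluates on) has the form below, where $\hat{\mu}$ is the estimated logging policy and $\hat{q}$ the estimated value model; the abstract's DRUnknown additionally fits $\hat{q}$ to minimize the estimator's asymptotic variance, which this generic display does not capture:

```latex
\hat{V}_{\mathrm{DR}}(\pi) \;=\; \frac{1}{n}\sum_{i=1}^{n}
\left[ \mathbb{E}_{a \sim \pi(\cdot \mid x_i)}\,\hat{q}(x_i, a)
  \;+\; \frac{\pi(a_i \mid x_i)}{\hat{\mu}(a_i \mid x_i)}
    \bigl( r_i - \hat{q}(x_i, a_i) \bigr) \right]
```

The estimator is consistent if either $\hat{\mu}$ or $\hat{q}$ is correctly specified, which is the "doubly-robust" property the title refers to.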
1808.01741 | Logical Semantics and Commonsense Knowledge: Where Did we Go Wrong, and
How to Go Forward, Again | We argue that logical semantics might have faltered due to its failure in distinguishing between two fundamentally very different types of concepts: ontological concepts, that should be types in a strongly-typed ontology, and logical concepts, that are predicates corresponding to properties of and relations between objects of various ontological types. We will then show that accounting for these differences amounts to the integration of lexical and compositional semantics in one coherent framework, and to an embedding in our logical semantics of a strongly-typed ontology that reflects our commonsense view of the world and the way we talk about it in ordinary language. We will show that in such a framework a number of challenges in natural language semantics can be adequately and systematically treated. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 104,639 |
1301.6743 | An Update Semantics for Defeasible Obligations | The deontic logic DUS is a Deontic Update Semantics for prescriptive obligations based on the update semantics of Veltman. In DUS the definition of logical validity of obligations is not based on static truth values but on dynamic action transitions. In this paper prescriptive defeasible obligations are formalized in update semantics and the diagnostic problem of defeasible deontic logic is discussed. Assume a defeasible obligation `normally A ought to be (done)' together with the fact `A is not (done).' Is this an exception to the normality claim, or is it a violation of the obligation? In this paper we formalize the heuristic principle that it is a violation, unless there is a more specific overriding obligation. The underlying motivation from legal reasoning is that criminals should have as few opportunities as possible to excuse themselves by claiming that their behavior was exceptional rather than criminal. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 21,536
1405.7102 | Detection Bank: An Object Detection Based Video Representation for
Multimedia Event Recognition | While low-level image features have proven to be effective representations for visual recognition tasks such as object recognition and scene classification, they are inadequate to capture complex semantic meaning required to solve high-level visual tasks such as multimedia event detection and recognition. Recognition or retrieval of events and activities can be improved if specific discriminative objects are detected in a video sequence. In this paper, we propose an image representation, called Detection Bank, based on the detection images from a large number of windowed object detectors where an image is represented by different statistics derived from these detections. This representation is extended to video by aggregating the key frame level image representations through mean and max pooling. We empirically show that it captures complementary information to state-of-the-art representations such as Spatial Pyramid Matching and Object Bank. These descriptors combined with our Detection Bank representation significantly outperforms any of the representations alone on TRECVID MED 2011 data. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 33,434 |
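A minimal sketch of the aggregation step this abstract describes: per-key-frame Detection Bank vectors pooled into a single video descriptor by mean and max pooling. Treating each per-frame entry as a detection statistic (e.g., the maximum detector score in that frame) is an assumption for illustration.

```python
import numpy as np

def detection_bank_video(frame_detections):
    """Aggregate key-frame Detection Bank vectors into one video descriptor.

    frame_detections: (n_frames, n_detectors) array of per-frame detection
    statistics. Returns the concatenation of mean- and max-pooled vectors.
    """
    D = np.asarray(frame_detections, dtype=float)
    return np.concatenate([D.mean(axis=0), D.max(axis=0)])

video = np.random.default_rng(0).random((30, 100))  # 30 key frames, 100 detectors
desc = detection_bank_video(video)
print(desc.shape)  # (200,) -- a fixed-length video representation
```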
2408.09822 | SurgicaL-CD: Generating Surgical Images via Unpaired Image Translation
with Latent Consistency Diffusion Models | Computer-assisted surgery (CAS) systems are designed to assist surgeons during procedures, thereby reducing complications and enhancing patient care. Training machine learning models for these systems requires a large corpus of annotated datasets, which is challenging to obtain in the surgical domain due to patient privacy concerns and the significant labeling effort required from doctors. Previous methods have explored unpaired image translation using generative models to create realistic surgical images from simulations. However, these approaches have struggled to produce high-quality, diverse surgical images. In this work, we introduce \emph{SurgicaL-CD}, a consistency-distilled diffusion method to generate realistic surgical images with only a few sampling steps without paired data. We evaluate our approach on three datasets, assessing the generated images in terms of quality and utility as downstream training datasets. Our results demonstrate that our method outperforms GANs and diffusion-based approaches. Our code is available at https://gitlab.com/nct_tso_public/gan2diffusion. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 481,604 |
2003.00229 | User-Level Privacy-Preserving Federated Learning: Analysis and
Performance Optimization | Federated learning (FL), as a type of collaborative machine learning framework, is capable of preserving private data from mobile terminals (MTs) while training the data into useful models. Nevertheless, from a viewpoint of information theory, it is still possible for a curious server to infer private information from the shared models uploaded by MTs. To address this problem, we first make use of the concept of local differential privacy (LDP), and propose a user-level differential privacy (UDP) algorithm by adding artificial noise to the shared models before uploading them to servers. According to our analysis, the UDP framework can realize $(\epsilon_{i}, \delta_{i})$-LDP for the $i$-th MT with adjustable privacy protection levels by varying the variances of the artificial noise processes. We then derive a theoretical convergence upper-bound for the UDP algorithm. It reveals that there exists an optimal number of communication rounds to achieve the best learning performance. More importantly, we propose a communication rounds discounting (CRD) method. Compared with the heuristic search method, the proposed CRD method can achieve a much better trade-off between the computational complexity of searching and the convergence performance. Extensive experiments indicate that our UDP algorithm using the proposed CRD method can effectively improve both the training efficiency and model quality for the given privacy protection levels. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 166,234 |
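A minimal sketch of the generic mechanism behind "adding artificial noise to the shared models before uploading": clip the local update to bound its sensitivity, then add Gaussian noise. The calibration below is the classical single-release Gaussian mechanism (valid for epsilon < 1); the paper's per-round UDP accounting and variance schedule differ.

```python
import numpy as np

def privatize_update(update, clip_norm, epsilon, delta, rng):
    """Clip a local model update to l2-norm clip_norm, then add Gaussian
    noise calibrated by the classical Gaussian mechanism (sketch only)."""
    update = np.asarray(update, dtype=float)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))  # bound sensitivity
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * clip_norm / epsilon
    return clipped + rng.normal(0.0, sigma, size=clipped.shape)

rng = np.random.default_rng(0)
noisy = privatize_update(np.ones(10), clip_norm=1.0,
                         epsilon=0.5, delta=1e-5, rng=rng)
print(noisy)
```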
2105.02796 | Practical and Rigorous Uncertainty Bounds for Gaussian Process
Regression | Gaussian Process Regression is a popular nonparametric regression method based on Bayesian principles that provides uncertainty estimates for its predictions. However, these estimates are of a Bayesian nature, whereas for some important applications, like learning-based control with safety guarantees, frequentist uncertainty bounds are required. Although such rigorous bounds are available for Gaussian Processes, they are too conservative to be useful in applications. This often leads practitioners to replace these bounds with heuristics, thus breaking all theoretical guarantees. To address this problem, we introduce new uncertainty bounds that are rigorous, yet practically useful at the same time. In particular, the bounds can be explicitly evaluated and are much less conservative than state-of-the-art results. Furthermore, we show that certain model misspecifications lead to only graceful degradation. We demonstrate these advantages and the usefulness of our results for learning-based control with numerical examples. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 233,931
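For reference, a numpy-only sketch of the exact GP posterior whose credible intervals the abstract's frequentist bounds concern; the RBF kernel, its hyperparameters, and the "mean ± beta * std" interval form are illustrative choices, not specifics from the paper.

```python
import numpy as np

def gp_posterior(X_train, y_train, X_test, lengthscale=1.0, noise=0.1):
    """Exact GP regression posterior mean and standard deviation (RBF kernel)."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale ** 2)

    K = rbf(X_train, X_train) + noise ** 2 * np.eye(len(X_train))
    Ks = rbf(X_test, X_train)
    L = np.linalg.cholesky(K)                         # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(rbf(X_test, X_test)) - (v ** 2).sum(0)
    # "mean +- beta * std" intervals: choosing beta rigorously yet
    # non-conservatively is exactly what the paper addresses.
    return mean, np.sqrt(np.maximum(var, 0.0))

X_tr = np.linspace(0, 5, 20)[:, None]
mean, std = gp_posterior(X_tr, np.sin(X_tr[:, 0]), np.linspace(0, 5, 50)[:, None])
```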
1807.11661 | Caging Loops in Shape Embedding Space: Theory and Computation | We propose to synthesize feasible caging grasps for a target object through computing Caging Loops, a closed curve defined in the shape embedding space of the object. Different from the traditional methods, our approach decouples caging loops from the surface geometry of target objects through working in the embedding space. This enables us to synthesize caging loops encompassing multiple topological holes, instead of always tied with one specific handle which could be too small to be graspable by the robot gripper. Our method extracts caging loops through a topological analysis of the distance field defined for the target surface in the embedding space, based on a rigorous theoretical study on the relation between caging loops and the field topology. Due to the decoupling, our method can tolerate incomplete and noisy surface geometry of an unknown target object captured on-the-fly. We implemented our method with a robotic gripper and demonstrate through extensive experiments that our method can synthesize reliable grasps for objects with complex surface geometry and topology and in various scales. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 104,222 |
1708.05812 | Discovery of Visual Semantics by Unsupervised and Self-Supervised
Representation Learning | The success of deep learning in computer vision is rooted in the ability of deep networks to scale up model complexity as demanded by challenging visual tasks. As complexity is increased, so is the need for large amounts of labeled data to train the model. This is associated with a costly human annotation effort. To address this concern, with the long-term goal of leveraging the abundance of cheap unlabeled data, we explore methods of unsupervised "pre-training." In particular, we propose to use self-supervised automatic image colorization. We show that traditional methods for unsupervised learning, such as layer-wise clustering or autoencoders, remain inferior to supervised pre-training. In search for an alternative, we develop a fully automatic image colorization method. Our method sets a new state-of-the-art in revitalizing old black-and-white photography, without requiring human effort or expertise. Additionally, it gives us a method for self-supervised representation learning. In order for the model to appropriately re-color a grayscale object, it must first be able to identify it. This ability, learned entirely self-supervised, can be used to improve other visual tasks, such as classification and semantic segmentation. As a future direction for self-supervision, we investigate if multiple proxy tasks can be combined to improve generalization. This turns out to be a challenging open problem. We hope that our contributions to this endeavor will provide a foundation for future efforts in making self-supervision compete with supervised pre-training. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 79,199 |
2111.12265 | Distribution Estimation to Automate Transformation Policies for
Self-Supervision | In recent visual self-supervision works, an imitated classification objective, called a pretext task, is established by assigning labels to transformed or augmented input images. The goal of the pretext task can be to predict which transformations have been applied to the image. However, it has been observed that image transformations already present in the dataset might be less effective for learning such self-supervised representations. Building on this observation, we propose a framework based on generative adversarial networks to automatically find the transformations which are not present in the input dataset and are thus effective for self-supervised learning. This automated policy allows us to estimate the transformation distribution of a dataset and also to construct its complementary distribution, from which training pairs are sampled for the pretext task. We evaluated our framework on several visual recognition datasets to show the efficacy of our automated transformation policy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 267,913
1711.10765 | Learning nonlinear state-space models using smooth particle-filter-based
likelihood approximations | When classical particle filtering algorithms are used for maximum likelihood parameter estimation in nonlinear state-space models, a key challenge is that estimates of the likelihood function and its derivatives are inherently noisy. The key idea in this paper is to run a particle filter based on a current parameter estimate, but then use the output from this particle filter to re-evaluate the likelihood function approximation also for other parameter values. This results in a (local) deterministic approximation of the likelihood and any standard optimization routine can be applied to find the maximum of this local approximation. By iterating this procedure we eventually arrive at a final parameter estimate. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 85,666 |
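A minimal sketch of the baseline this abstract improves on: a bootstrap particle filter whose log-likelihood estimate is inherently noisy as a function of the parameter theta. The scalar linear-Gaussian model below is an illustrative stand-in, not the paper's model; the paper's contribution is reusing one filter run to build a smooth local likelihood approximation over other parameter values.

```python
import numpy as np

def pf_loglik(y, theta, n_particles=500, seed=0):
    """Bootstrap particle filter log-likelihood estimate for
    x_t = theta * x_{t-1} + v_t,  y_t = x_t + e_t,  v_t, e_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_particles)
    loglik = 0.0
    for yt in y:
        x = theta * x + rng.normal(size=n_particles)           # propagate
        logw = -0.5 * (yt - x) ** 2 - 0.5 * np.log(2 * np.pi)  # N(y | x, 1)
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())   # running log-likelihood estimate
        x = x[rng.choice(n_particles, n_particles, p=w / w.sum())]  # resample
    return loglik

# Simulate data from theta = 0.7 and evaluate the (noisy) estimate there
rng = np.random.default_rng(1)
x_true, ys = 0.0, []
for _ in range(100):
    x_true = 0.7 * x_true + rng.normal()
    ys.append(x_true + rng.normal())
print(pf_loglik(np.array(ys), theta=0.7))
```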
2501.02104 | Equivalence of Informations Characterizes Bregman Divergences | Bregman divergences are a class of distance-like comparison functions which play fundamental roles in optimization, statistics, and information theory. One important property of Bregman divergences is that they cause two useful formulations of information content (in the sense of variability or non-uniformity) in a weighted collection of vectors to agree. In this note, we show that this agreement in fact characterizes the class of Bregman divergences; they are the only divergences which generate this agreement for arbitrary collections of weighted vectors. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 522,349 |
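As background, the standard definition referenced here (a known fact, not new material from the abstract): for a differentiable, strictly convex generator $F$, the Bregman divergence is

```latex
D_F(x, y) \;=\; F(x) - F(y) - \langle \nabla F(y),\, x - y \rangle
```

Choosing $F(x) = \lVert x \rVert_2^2$ recovers squared Euclidean distance, and $F(x) = \sum_i x_i \log x_i$ yields the (generalized) Kullback-Leibler divergence.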
2009.14096 | Weakly Supervised-Based Oversampling for High Imbalance and High
Dimensionality Data Classification | With the abundance of industrial datasets, imbalanced classification has become a common problem in several application domains. Oversampling is an effective method to solve imbalanced classification. One of the main challenges of the existing oversampling methods is to accurately label the new synthetic samples. Inaccurate labels of the synthetic samples would distort the distribution of the dataset and possibly worsen the classification performance. This paper introduces the idea of weakly supervised learning to handle the inaccurate labeling of synthetic samples caused by traditional oversampling methods. Graph semi-supervised SMOTE is developed to improve the credibility of the synthetic samples' labels. In addition, we propose cost-sensitive neighborhood components analysis for high-dimensional datasets and a bootstrap-based ensemble framework for highly imbalanced datasets. The proposed method has achieved good classification performance on 8 synthetic datasets and 3 real-world datasets, especially for high imbalance and high dimensionality problems. The average performance and robustness are better than those of the benchmark methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 197,935
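A minimal sketch of the classic SMOTE interpolation step that the abstract's graph semi-supervised variant builds on: each synthetic point lies on the segment between a minority sample and one of its k nearest minority neighbors. The re-labeling of synthetic samples, which is the paper's actual contribution, is deliberately omitted here.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, seed=0):
    """Generate n_new synthetic minority samples by SMOTE interpolation."""
    rng = np.random.default_rng(seed)
    d2 = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)             # exclude self-neighbors
    nn_idx = np.argsort(d2, axis=1)[:, :k]   # k nearest minority neighbors
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn_idx[i, rng.integers(k)]
        lam = rng.random()                   # interpolation coefficient in [0, 1)
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synth)

X_min = np.random.default_rng(1).normal(size=(20, 3))
print(smote_oversample(X_min, n_new=40).shape)  # (40, 3)
```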
2304.07357 | Efficient Incremental Penetration Depth Estimation between Convex
Geometries | Penetration depth (PD) is essential for robotics due to its extensive applications in dynamic simulation, motion planning, haptic rendering, etc. The Expanding Polytope Algorithm (EPA) is the de facto standard for this problem, which estimates PD by expanding an inner polyhedral approximation of an implicit set. In this paper, we propose a novel optimization-based algorithm that incrementally estimates minimum penetration depth and its direction. One major advantage of our method is that it can be warm-started by exploiting the spatial and temporal coherence, which emerges naturally in many robotic applications (e.g., the temporal coherence between adjacent simulation time knots). As a result, our algorithm achieves substantial speedup -- we demonstrate it is 5-30x faster than EPA on several benchmarks. Moreover, our approach is built upon the same implicit geometry representation as EPA, which enables easy integration and deployment into existing software stacks. We also provide an open-source implementation on: https://github.com/weigao95/mind-fcl | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 358,315 |
2310.06916 | Distributed Transfer Learning with 4th Gen Intel Xeon Processors | In this paper, we explore how transfer learning, coupled with Intel Xeon processors, specifically the 4th Gen Intel Xeon Scalable processor, defies the conventional belief that training is primarily GPU-dependent. We present a case study where we achieved near state-of-the-art accuracy for image classification on a publicly available Image Classification TensorFlow dataset using Intel Advanced Matrix Extensions (AMX) and distributed training with Horovod. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 398,759
2012.10369 | Upper and Lower Bounds on the Performance of Kernel PCA | Principal Component Analysis (PCA) is a popular method for dimension reduction and has attracted an unfailing interest for decades. More recently, kernel PCA (KPCA) has emerged as an extension of PCA but, despite its use in practice, a sound theoretical understanding of KPCA is missing. We contribute several lower and upper bounds on the efficiency of KPCA, involving the empirical eigenvalues of the kernel Gram matrix and new quantities involving a notion of variance. These bounds show how much information is captured by KPCA on average and contribute a better theoretical understanding of its efficiency. We demonstrate that fast convergence rates are achievable for a widely used class of kernels and we highlight the importance of some desirable properties of datasets to ensure KPCA efficiency. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 212,328 |
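A minimal numpy sketch of KPCA itself, to make concrete which empirical eigenvalues the abstract's bounds involve; the RBF kernel and double-centering scheme are standard choices, not specifics from the paper.

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    """Kernel PCA via eigendecomposition of the centered Gram matrix."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)                   # RBF kernel Gram matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                            # double-center in feature space
    vals, vecs = np.linalg.eigh(Kc)           # empirical eigenvalues/vectors
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    Z = vecs * np.sqrt(np.maximum(vals, 0.0)) # projections of training points
    captured = vals.sum() / np.trace(Kc)      # fraction of variance captured
    return Z, captured

X = np.random.default_rng(0).normal(size=(100, 3))
Z, captured = kernel_pca(X, n_components=2, gamma=0.5)
print(Z.shape, round(captured, 3))
```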
2305.17768 | AIMS: All-Inclusive Multi-Level Segmentation | Despite the progress of image segmentation for accurate visual entity segmentation, completing the diverse requirements of image editing applications for different-level region-of-interest selections remains unsolved. In this paper, we propose a new task, All-Inclusive Multi-Level Segmentation (AIMS), which segments visual regions into three levels: part, entity, and relation (two entities with some semantic relationships). We also build a unified AIMS model through multi-dataset multi-task training to address the two major challenges of annotation inconsistency and task correlation. Specifically, we propose task complementarity, association, and prompt mask encoder for three-level predictions. Extensive experiments demonstrate the effectiveness and generalization capacity of our method compared to other state-of-the-art methods on a single dataset or the concurrent work on segmenting anything. We will make our code and training model publicly available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 368,742 |
2103.03447 | User-Centric Cooperative MEC Service Offloading | Mobile edge computing provides users with a cloud environment close to the edge of the wireless network, supporting the computing intensive applications that have low latency requirements. The combination of offloading with the wireless communication brings new challenges. This paper investigates the service caching problem during the long-term service offloading in the user-centric wireless network. To meet the time-varying service demands of a typical user, a cooperative service caching strategy in the unit of the base station (BS) cluster is proposed. We formulate the caching problem as a time-averaged completion delay minimization problem and transform it into time-decoupled instantaneous problems with a virtual caching cost queue at first. Then we propose a distributed algorithm which is based on the consensus-sharing alternating direction method of multipliers to solve each instantaneous problem. The simulations validate that the proposed online distributed service caching algorithm can achieve the optimal time-averaged completion delay of offloading tasks with the smallest caching cost in the unit of a BS cluster. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 223,283 |
2107.01002 | WiCluster: Passive Indoor 2D/3D Positioning using WiFi without Precise
Labels | We introduce WiCluster, a new machine learning (ML) approach for passive indoor positioning using radio frequency (RF) channel state information (CSI). WiCluster can predict both a zone-level position and a precise 2D or 3D position, without using any precise position labels during training. Prior CSI-based indoor positioning work has relied on non-parametric approaches using digital signal processing (DSP) and, more recently, parametric approaches (e.g., fully supervised ML methods). However, these do not handle the complexity of real-world environments well and do not meet the requirements for large-scale commercial deployments: the accuracy of DSP-based methods deteriorates significantly in non-line-of-sight conditions, while supervised ML methods need large amounts of hard-to-acquire centimeter-accuracy position labels. In contrast, WiCluster is precise, requires weaker label information that can be easily collected, and works well in non-line-of-sight conditions. Our first contribution is a novel dimensionality reduction method for charting. It combines a triplet loss with a multi-scale clustering loss to map the high-dimensional CSI representation to a 2D/3D latent space. Our second contribution is two weakly supervised losses that map this latent space into a Cartesian map, resulting in meter-accuracy position results. These losses only require simple-to-acquire priors: a sketch of the floorplan, approximate access-point locations and a few CSI packets that are labelled with the corresponding zone in the floorplan. Thirdly, we report results and a robustness study for 2D positioning in two single-floor office buildings and 3D positioning in a two-story home. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 244,348
2406.08042 | Efficient Network Traffic Feature Sets for IoT Intrusion Detection | The use of Machine Learning (ML) models in cybersecurity solutions requires high-quality data that is stripped of redundant, missing, and noisy information. By selecting the most relevant features, data integrity and model efficiency can be significantly improved. This work evaluates the feature sets provided by a combination of different feature selection methods, namely Information Gain, Chi-Squared Test, Recursive Feature Elimination, Mean Absolute Deviation, and Dispersion Ratio, in multiple IoT network datasets. The influence of the smaller feature sets on both the classification performance and the training time of ML models is compared, with the aim of increasing the computational efficiency of IoT intrusion detection. Overall, the most impactful features of each dataset were identified, and the ML models obtained higher computational efficiency while preserving a good generalization, showing little to no difference between the sets. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 463,328 |
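A minimal sketch of combining several of the feature selection methods listed in this abstract with scikit-learn; the toy data stands in for an IoT flow dataset, and the dispersion-ratio score is omitted.

```python
import numpy as np
from sklearn.feature_selection import (SelectKBest, chi2,
                                       mutual_info_classif, RFE)
from sklearn.linear_model import LogisticRegression

# Toy stand-in for an IoT flow dataset: non-negative counts, binary labels.
rng = np.random.default_rng(0)
X = rng.integers(0, 100, size=(500, 20)).astype(float)
y = rng.integers(0, 2, size=500)

# Information Gain (mutual information) and Chi-Squared rankings
ig = SelectKBest(mutual_info_classif, k=8).fit(X, y)
cs = SelectKBest(chi2, k=8).fit(X, y)  # chi2 requires non-negative features

# Recursive Feature Elimination with a linear model
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=8).fit(X, y)

# Mean Absolute Deviation as a simple filter score (no sklearn helper)
mad = np.mean(np.abs(X - X.mean(axis=0)), axis=0)

print(ig.get_support(indices=True))
print(cs.get_support(indices=True))
print(rfe.get_support(indices=True))
print(np.argsort(mad)[::-1][:8])
```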
1906.00460 | On The Radon-Nikodym Spectral Approach With Optimal Clustering | Problems of interpolation, classification, and clustering are considered. Within the tenets of the Radon-Nikodym approach $\langle f(\mathbf{x})\psi^2 \rangle / \langle\psi^2\rangle$, where $\psi(\mathbf{x})$ is a linear function on input attributes, all the answers are obtained from a generalized eigenproblem $|f|\psi^{[i]}\rangle = \lambda^{[i]} |\psi^{[i]}\rangle$. The solution to the interpolation problem is a regular Radon-Nikodym derivative. The solution to the classification problem requires prior and posterior probabilities that are obtained using the Lebesgue quadrature [1] technique. Whereas in a Bayesian approach new observations change only outcome probabilities, in the Radon-Nikodym approach not only the outcome probabilities but also the probability space $|\psi^{[i]}\rangle$ change with new observations. This is a remarkable feature of the approach: both the probabilities and the probability space are constructed from the data. The Lebesgue quadrature technique can also be applied to the optimal clustering problem. The problem is solved by constructing a Gaussian quadrature on the Lebesgue measure. A distinguishing feature of the Radon-Nikodym approach is the knowledge of the invariant group: all the answers are invariant relative to any non-degenerate linear transform of the input vector $\mathbf{x}$ components. A software product implementing the algorithms of interpolation, classification, and optimal clustering is available from the authors. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 133,407
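A minimal sketch of the generalized eigenproblem at the heart of this abstract, using raw input attributes as the basis for the linear function $\psi(\mathbf{x})$; the paper's Lebesgue-quadrature machinery is richer than this.

```python
import numpy as np
from scipy.linalg import eigh

def radon_nikodym_eig(X, f):
    """Solve the generalized eigenproblem built from data moments:
    A = <f x x^T> (i.e. <f psi^2>),  B = <x x^T> (i.e. <psi^2>)."""
    n = len(X)
    B = X.T @ X / n                        # <psi^2> Gram matrix
    A = (X * f[:, None]).T @ X / n         # <f psi^2> moment matrix
    lam, V = eigh(A, B)                    # generalized symmetric eigenproblem
    return lam, V

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
f = np.sin(X[:, 0])
lam, V = radon_nikodym_eig(X, f)
# Each eigenvalue is a weighted average <f psi^2>/<psi^2>, so the lam
# values lie within the observed range of f.
print(lam)
```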
1707.02786 | Learning to Compose Task-Specific Tree Structures | For years, recursive neural networks (RvNNs) have been shown to be suitable for representing text into fixed-length vectors and achieved good performance on several natural language processing tasks. However, the main drawback of RvNNs is that they require structured input, which makes data preparation and model implementation hard. In this paper, we propose Gumbel Tree-LSTM, a novel tree-structured long short-term memory architecture that learns how to compose task-specific tree structures only from plain text data efficiently. Our model uses Straight-Through Gumbel-Softmax estimator to decide the parent node among candidates dynamically and to calculate gradients of the discrete decision. We evaluate the proposed model on natural language inference and sentiment analysis, and show that our model outperforms or is at least comparable to previous models. We also find that our model converges significantly faster than other models. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 76,757 |
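A minimal sketch of the Straight-Through Gumbel-Softmax estimator this abstract relies on: a hard one-hot parent choice in the forward pass, with gradients taken through the soft sample in the backward pass.

```python
import torch

def st_gumbel_softmax(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Straight-Through Gumbel-Softmax sample over the last dimension."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    y_soft = torch.softmax((logits + gumbel) / tau, dim=-1)
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    # Forward: y_hard (discrete choice). Backward: gradient of y_soft.
    return y_hard + (y_soft - y_soft.detach())

scores = torch.randn(3, 5, requires_grad=True)  # 5 candidate parent nodes
choice = st_gumbel_softmax(scores)               # one-hot in the forward pass
choice.sum().backward()                          # gradients still flow to scores
print(choice)
```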
2411.18199 | Semantic Edge Computing and Semantic Communications in 6G Networks: A
Unifying Survey and Research Challenges | Semantic Edge Computing (SEC) and Semantic Communications (SemComs) have been proposed as viable approaches to achieve real-time edge-enabled intelligence in sixth-generation (6G) wireless networks. On one hand, SemCom leverages the strength of Deep Neural Networks (DNNs) to encode and communicate the semantic information only, while making it robust to channel distortions by compensating for wireless effects. Ultimately, this leads to an improvement in the communication efficiency. On the other hand, SEC has leveraged distributed DNNs to divide the computation of a DNN across different devices based on their computational and networking constraints. Although significant progress has been made in both fields, the literature lacks a systematic view to connect both fields. In this work, we fulfill the current gap by unifying the SEC and SemCom fields. We summarize the research problems in these two fields and provide a comprehensive review of the state of the art with a focus on their technical strengths and challenges. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 511,769 |
1707.04202 | Multi-Antenna Assisted Virtual Full-Duplex Relaying with
Reliability-Aware Iterative Decoding | In this paper, a multi-antenna assisted virtual full-duplex (FD) relaying scheme with reliability-aware iterative decoding at the destination node is proposed to improve system spectral efficiency and reliability. This scheme enables two half-duplex relay nodes, mimicking FD relaying, to alternately serve as transmitter and receiver to relay their decoded data signals regardless of decoding errors, and meanwhile cancel the inter-relay interference with QR decomposition. Then, by deploying the reliability-aware iterative detection/decoding process, the destination node can efficiently mitigate inter-frame interference and the error propagation effect at the same time. Simulation results show that, without extra cost in time delay and signalling overhead, our proposed scheme outperforms conventional selective decode-and-forward (S-DF) relaying schemes, such as cyclic redundancy check based S-DF relaying and threshold based S-DF relaying, by up to 8 dB in terms of bit-error-rate. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 77,001
2004.09702 | Heterogeneous Causal Learning for Effectiveness Optimization in User Marketing | User marketing is a key focus of consumer-based internet companies. Learning algorithms are effective at optimizing marketing campaigns that increase user engagement and facilitate cross-marketing to related products. By attracting users with rewards, marketing methods are effective at boosting user activity in the desired products. Rewards incur significant cost that can be offset by increases in future revenue. Most methodologies rely on churn prediction to make marketing decisions aimed at preventing user loss, which cannot capture uplift across counterfactual outcomes with business metrics. Other predictive models are capable of estimating heterogeneous treatment effects, but fail to capture the balance of cost versus benefit. We propose a treatment effect optimization methodology for user marketing. This algorithm learns from past experiments and utilizes novel optimization methods to optimize cost efficiency with respect to user selection. The method optimizes decisions using deep learning optimization models to treat and reward users, which is effective in producing cost-effective, impactful marketing campaigns. Our methodology demonstrates superior algorithmic flexibility through integration with deep learning methods and handling of business constraints. The effectiveness of our model surpasses the quasi-oracle estimation (R-learner) model and causal forests. We also established evaluation metrics that reflect cost-efficiency and real-world business value. Our proposed constrained and direct optimization algorithms outperform the best performing method in prior art and baseline methods by 24.6%. The methodology is useful in many product scenarios such as optimal treatment allocation, and it has been deployed in production world-wide. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 173,420
2105.04534 | Improving Fairness of AI Systems with Lossless De-biasing | In today's society, AI systems are increasingly used to make critical decisions such as credit scoring and patient triage. However, the great convenience brought by AI systems comes with a troubling prevalence of bias against underrepresented groups. Mitigating bias in AI systems to increase overall fairness has emerged as an important challenge. Existing studies on mitigating bias in AI systems focus on eliminating sensitive demographic information embedded in data. Given the temporal and contextual complexity of conceptualizing fairness, lossy treatment of demographic information may contribute to an unnecessary trade-off between accuracy and fairness, especially when demographic attributes and class labels are correlated. In this paper, we present an information-lossless de-biasing technique that targets the scarcity of data in the disadvantaged group. Unlike existing work, we demonstrate, both theoretically and empirically, that oversampling underrepresented groups can not only mitigate algorithmic bias in AI systems that consistently predict a favorable outcome for a certain group, but also improve overall accuracy by mitigating the class imbalance within data that leads to a bias towards the majority class. We demonstrate the effectiveness of our technique on real datasets using a variety of fairness metrics. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 234,537
2305.09442 | Towards Automatic Identification of Globally Valid Geometric Flat Outputs via Numerical Optimization | Differential flatness enables efficient planning and control for underactuated robotic systems, but we lack a systematic and practical means of identifying a flat output (or determining whether one exists) for an arbitrary robotic system. In this work, we leverage recent results elucidating the role of symmetry in constructing flat outputs for free-flying robotic systems. Using the tools of Riemannian geometry, Lie group theory, and differential forms, we cast the search for a globally valid, equivariant flat output as an optimization problem. An approximate transcription of this continuum formulation to a quadratic program is performed, and its solutions for two example systems achieve precise agreement with the known closed-form flat outputs. Our results point towards a systematic, automated approach to numerically identify geometric flat outputs directly from the system model, particularly useful when complexity renders pen and paper analysis intractable. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 364,633
2107.04952 | Learn from Anywhere: Rethinking Generalized Zero-Shot Learning with Limited Supervision | A common problem with most zero and few-shot learning approaches is that they suffer from bias towards seen classes, resulting in sub-optimal performance. Existing efforts aim to utilize unlabeled images from unseen classes (i.e., transductive zero-shot) during training to enable generalization. However, this limits their use in practical scenarios where data from target unseen classes is unavailable or infeasible to collect. In this work, we present a practical setting of inductive zero and few-shot learning, where unlabeled images from other out-of-data classes, which do not belong to seen or unseen categories, can be used to improve generalization in any-shot learning. We leverage a formulation based on product-of-experts and introduce a new AUD module that enables us to use unlabeled samples from out-of-data classes, which are usually easily available and practically entail no annotation cost. In addition, we demonstrate the applicability of our model to a more practical and challenging Generalized Zero-Shot setting with limited supervision, where even base seen classes do not have sufficient annotated samples. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 245,615
2407.07227 | Uncovering the Interaction Equation: Quantifying the Effect of User Interactions on Social Media Homepage Recommendations | Social media platforms depend on algorithms to select, curate, and deliver content personalized for their users. These algorithms leverage users' past interactions and extensive content libraries to retrieve and rank content that personalizes experiences and boosts engagement. Among various modalities through which this algorithmically curated content may be delivered, the homepage feed is the most prominent. This paper presents a comprehensive study of how prior user interactions influence the content presented on users' homepage feeds across three major platforms: YouTube, Reddit, and X (formerly Twitter). We use a series of carefully designed experiments to gather data capable of uncovering the influence of specific user interactions on homepage content. This study provides insights into the behaviors of the content curation algorithms used by each platform, how they respond to user interactions, and also uncovers evidence of deprioritization of specific topics. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 471,677
2202.02423 | Improved Information Theoretic Generalization Bounds for Distributed and Federated Learning | We consider information-theoretic bounds on expected generalization error for statistical learning problems in a networked setting. In this setting, there are $K$ nodes, each with its own independent dataset, and the models from each node have to be aggregated into a final centralized model. We consider both simple averaging of the models as well as more complicated multi-round algorithms. We give upper bounds on the expected generalization error for a variety of problems, such as those with Bregman divergence or Lipschitz continuous losses, that demonstrate an improved dependence of $1/K$ on the number of nodes. These "per node" bounds are in terms of the mutual information between the training dataset and the trained weights at each node, and are therefore useful in describing the generalization properties inherent to having communication or privacy constraints at each node. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 278,796
1905.08869 | Source Localization and Tracking for Dynamic Radio Cartography using Directional Antennas | Utilization of directional antennas is a promising solution for efficient spectrum sensing and accurate source localization and tracking. Spectrum sensors equipped with directional antennas should constantly scan the space in order to track emitting sources and discover new activities in the area of interest. In this paper, we propose a new formulation that unifies received-signal-strength (RSS) and direction of arrival (DoA) in a compressive sensing (CS) framework. The underlying CS measurement matrix is a function of the beamforming vectors of the sensors and is referred to as the propagation matrix. Compared to the omni-directional antenna case, our propagation matrix provides more incoherent projections, an essential factor in compressive sensing theory. Based on the new formulation, we optimize the antenna beams, enhance spectrum sensing efficiency, track active primary users accurately, and monitor spectrum activities in an area of interest. In many practical scenarios there is no fusion center to integrate received data from spectrum sensors; we propose a distributed version of our algorithm for such cases. Experimental results show a significant improvement in source localization accuracy compared with the scenario where sensors are equipped with omni-directional antennas. The applicability of the proposed framework for dynamic radio cartography is shown. Moreover, comparing the estimated dynamic RF map over time with the ground truth demonstrates the effectiveness of our proposed method for accurate signal estimation and recovery. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 131,590
2002.03308 | Face Hallucination with Finishing Touches | Obtaining a high-quality frontal face image from a low-resolution (LR) non-frontal face image is of primary importance for many facial analysis applications. However, mainstream methods either focus on super-resolving near-frontal LR faces or on frontalizing non-frontal high-resolution (HR) faces. It is desirable to perform both tasks seamlessly for daily-life unconstrained face images. In this paper, we present a novel Vivid Face Hallucination Generative Adversarial Network (VividGAN) for simultaneously super-resolving and frontalizing tiny non-frontal face images. VividGAN consists of coarse-level and fine-level Face Hallucination Networks (FHnet) and two discriminators, i.e., Coarse-D and Fine-D. The coarse-level FHnet generates a frontal coarse HR face and then the fine-level FHnet makes use of the facial component appearance prior, i.e., fine-grained facial components, to attain a frontal HR face image with authentic details. In the fine-level FHnet, we also design a facial component-aware module that adopts facial geometry guidance as clues to accurately align and merge the frontal coarse HR face and the prior information. Meanwhile, the two-level discriminators are designed to capture both the global outline of a face image and detailed facial characteristics. The Coarse-D enforces the coarsely hallucinated faces to be upright and complete, while the Fine-D focuses on the fine hallucinated ones for sharper details. Extensive experiments demonstrate that our VividGAN achieves photo-realistic frontal HR faces, reaching superior performance in downstream tasks, i.e., face recognition and expression classification, compared with other state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 163,223
2310.10762 | Exploring hyperelastic material model discovery for human brain cortex: multivariate analysis vs. artificial neural network approaches | Traditional computational methods, such as finite element analysis, have provided valuable insights into uncovering the underlying mechanisms of brain physical behaviors. However, precise predictions of brain physics require effective constitutive models to represent the intricate mechanical properties of brain tissue. In this study, we aimed to identify the most favorable constitutive material model for human brain tissue. To achieve this, we applied artificial neural network and multiple regression methods to a generalization of widely accepted classic models, and compared the results obtained from the two approaches. To evaluate the applicability and efficacy of the model, all setups were kept consistent across both methods, except for the approach used to prevent potential overfitting. Our results demonstrate that artificial neural networks are capable of automatically identifying accurate constitutive models from given admissible estimators. Nonetheless, the five-term and two-term neural network models trained under single-mode and multi-mode loading scenarios were found to be suboptimal; they could be further simplified into two-term and single-term models, respectively, with higher accuracy using multiple regression. Our findings highlight the importance of hyperparameters for the artificial neural network and emphasize the necessity of detailed cross-validation of regularization parameters to ensure optimal selection at a global level in the development of material constitutive models. This study validates the applicability and accuracy of artificial neural networks for automatically discovering constitutive material models with proper regularization, as well as the benefits of model simplification without compromising accuracy relative to traditional multivariable regression. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 400,375
2406.03144 | A Combination Model for Time Series Prediction using LSTM via Extracting Dynamic Features Based on Spatial Smoothing and Sequential General Variational Mode Decomposition | To address the difficulty of extracting effective features and the low accuracy of sales volume prediction caused by complex relationships in time series such as market sales volume, we propose a time series prediction method for market sales volume based on a combination model of Sequential General VMD and a spatially smoothed long short-term memory neural network (SS-LSTM). First, the spatial smoothing algorithm is used to decompose and process the sample data of related industry sectors affected by the linkage effect of market sectors, extracting modal features that carry information on the overall market and specific price trends via Sequential General VMD. Then, for different market datasets, an LSTM network is used to model and predict prices from fundamental data and modal characteristics. Experimental results on data with seasonal and periodic trends show that, compared with traditional prediction methods, this method achieves higher price prediction accuracy and more accurately describes changes in market sales volume in specific market contexts. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 461,110
2102.04172 | Directed particle swarm optimization with Gaussian-process-based function forecasting | Particle swarm optimization (PSO) is an iterative search method that moves a set of candidate solutions around a search space towards the best known global and local solutions with randomized step lengths. PSO frequently accelerates optimization in practical applications where gradients are unavailable and function evaluations are expensive. Yet the traditional PSO algorithm ignores the knowledge of the objective function that could be gained from the observations of individual particles. Hence, we draw upon concepts from Bayesian optimization and introduce a stochastic surrogate model of the objective function. That is, we fit a Gaussian process to past evaluations of the objective function, forecast its shape, and then adapt the particle movements based on it. Our computational experiments demonstrate that baseline implementations of PSO (i.e., SPSO2011) are outperformed. Furthermore, compared to state-of-the-art surrogate-assisted evolutionary algorithms, we achieve substantial performance improvements on several popular benchmark functions. Overall, we find that our algorithm attains desirable properties for exploratory and exploitative behavior. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 219,015
1903.01693 | Less is More: Semi-Supervised Causal Inference for Detecting Pathogenic Users in Social Media | Recent years have witnessed a surge of manipulation of public opinion and political events by malicious social media actors. These users are referred to as "Pathogenic Social Media (PSM)" accounts. PSMs are key users in spreading misinformation in social media to viral proportions. These accounts can be either controlled by real users or automated bots. Identification of PSMs is thus of utmost importance for social media authorities. The burden usually falls to automatic approaches that can identify these accounts and protect social media reputation. However, lack of sufficient labeled examples for devising and training sophisticated approaches to combat these accounts is still one of the foremost challenges facing social media firms. In contrast, unlabeled data is abundant and cheap to obtain thanks to massive user-generated data. In this paper, we propose a semi-supervised causal inference PSM detection framework, SemiPsm, to compensate for the lack of labeled data. In particular, the proposed method leverages unlabeled data in the form of manifold regularization and only relies on cascade information. This is in contrast to the existing approaches that use exhaustive feature engineering (e.g., profile information, network structure, etc.). Evidence from empirical experiments on a real-world ISIS-related dataset from Twitter suggests promising results of utilizing unlabeled instances for detecting PSMs. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 123,319
2007.07061 | Polarization in Networks: Identification-alienation Framework | We introduce a model of polarization in networks as a unifying framework for the measurement of polarization that covers a wide range of applications. We consider a sufficiently general setup for this purpose: node- and edge-weighted, undirected, and connected networks. We generalize the axiomatic characterization of Esteban and Ray (1994) and show that only a particular instance within this class can be used justifiably to measure polarization in networks. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 187,213 |
1006.0475 | Prediction with Advice of Unknown Number of Experts | In the framework of prediction with expert advice, we consider a recently introduced kind of regret bounds: the bounds that depend on the effective instead of nominal number of experts. In contrast to the NormalHedge bound, which mainly depends on the effective number of experts and also weakly depends on the nominal one, we obtain a bound that does not contain the nominal number of experts at all. We use the defensive forecasting method and introduce an application of defensive forecasting to multivalued supermartingales. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 6,654 |
2006.05675 | IMUTube: Automatic Extraction of Virtual on-body Accelerometry from Video for Human Activity Recognition | The lack of large-scale, labeled data sets impedes progress in developing robust and generalized predictive models for on-body sensor-based human activity recognition (HAR). Labeled data in human activity recognition is scarce and hard to come by, as sensor data collection is expensive, and the annotation is time-consuming and error-prone. To address this problem, we introduce IMUTube, an automated processing pipeline that integrates existing computer vision and signal processing techniques to convert videos of human activity into virtual streams of IMU data. These virtual IMU streams represent accelerometry at a wide variety of locations on the human body. We show how the virtually-generated IMU data improves the performance of a variety of models on known HAR datasets. Our initial results are very promising, but the greater promise of this work lies in a collective approach by the computer vision, signal processing, and activity recognition communities to extend this work in ways that we outline. This should lead to on-body, sensor-based HAR becoming yet another success story in large-dataset breakthroughs in recognition. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 181,161
1507.07147 | True Online Emphatic TD($\lambda$): Quick Reference and Implementation Guide | This document is a guide to the implementation of true online emphatic TD($\lambda$), a model-free temporal-difference algorithm for learning to make long-term predictions which combines the emphasis idea (Sutton, Mahmood & White 2015) and the true-online idea (van Seijen & Sutton 2014). The setting used here includes linear function approximation, the possibility of off-policy training, and all the generality of general value functions, as well as the emphasis algorithm's notion of "interest". | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 45,452
2405.05190 | Is Transductive Learning Equivalent to PAC Learning? | Much of learning theory is concerned with the design and analysis of probably approximately correct (PAC) learners. The closely related transductive model of learning has recently seen more scrutiny, with its learners often used as precursors to PAC learners. Our goal in this work is to understand and quantify the exact relationship between these two models. First, we observe that modest extensions of existing results show the models to be essentially equivalent for realizable learning for most natural loss functions, up to low order terms in the error and sample complexity. The situation for agnostic learning appears less straightforward, with sample complexities potentially separated by a $\frac{1}{\epsilon}$ factor. This is therefore where our main contributions lie. Our results are two-fold: 1. For agnostic learning with bounded losses (including, for example, multiclass classification), we show that PAC learning reduces to transductive learning at the cost of low-order terms in the error and sample complexity via an adaptation of the reduction of arXiv:2304.09167 to the agnostic setting. 2. For agnostic binary classification, we show the converse: transductive learning is essentially no more difficult than PAC learning. Together with our first result this implies that the PAC and transductive models are essentially equivalent for agnostic binary classification. This is our most technical result, and involves two steps: A symmetrization argument on the agnostic one-inclusion graph (OIG) of arXiv:2309.13692 to derive the worst-case agnostic transductive instance, and expressing the error of the agnostic OIG algorithm for this instance in terms of the empirical Rademacher complexity of the class. We leave as an intriguing open question whether our second result can be extended beyond binary classification to show the transductive and PAC models equivalent more broadly. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 452,829 |
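Since rows like the ones above are most useful when filtered by their per-category label flags, here is a minimal sketch of one way to query them. It assumes the table has been exported as a pipe-delimited file named `arxiv_labels.csv` with the column layout shown above; the file name, the delimiter, and the exact column spellings are assumptions for illustration, not part of any published loader for this dataset.

```python
# Minimal sketch: load the label table and query it by category flags.
# Assumptions (hypothetical): a local pipe-delimited export named
# "arxiv_labels.csv" whose columns match the fields shown above
# (id, title, abstract, one boolean column per arXiv category, an index).
import pandas as pd

df = pd.read_csv("arxiv_labels.csv", sep="|", skipinitialspace=True)
df.columns = df.columns.str.strip()

# Category flags may arrive as the strings "true"/"false"; normalize
# them to real booleans so they can be used directly as row masks.
label_cols = [c for c in df.columns if c.startswith("cs.") or c == "Other"]
for c in label_cols:
    df[c] = df[c].astype(str).str.strip().str.lower().eq("true")

# Example query: abstracts flagged as machine learning but not vision.
ml_only = df[df["cs.LG"] & ~df["cs.CV"]]
print(f"{len(ml_only)} rows labeled cs.LG and not cs.CV")
```

Note that records are multi-label (several rows above carry two or three `true` flags), so per-category counts will sum to more than the number of rows.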