id stringlengths 9 16 | title stringlengths 4 278 | abstract stringlengths 3 4.08k | cs.HC bool 2 classes | cs.CE bool 2 classes | cs.SD bool 2 classes | cs.SI bool 2 classes | cs.AI bool 2 classes | cs.IR bool 2 classes | cs.LG bool 2 classes | cs.RO bool 2 classes | cs.CL bool 2 classes | cs.IT bool 2 classes | cs.SY bool 2 classes | cs.CV bool 2 classes | cs.CR bool 2 classes | cs.CY bool 2 classes | cs.MA bool 2 classes | cs.NE bool 2 classes | cs.DB bool 2 classes | Other bool 2 classes | __index_level_0__ int64 0 541k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1906.05062 | Unified Semantic Parsing with Weak Supervision | Semantic parsing over multiple knowledge bases enables a parser to exploit structural similarities of programs across the multiple domains. However, the fundamental challenge lies in obtaining high-quality annotations of (utterance, program) pairs across various domains needed for training such models. To overcome this, we propose a novel framework to build a unified multi-domain enabled semantic parser trained only with weak supervision (denotations). Weakly supervised training is particularly arduous as the program search space grows exponentially in a multi-domain setting. To solve this, we incorporate a multi-policy distillation mechanism in which we first train domain-specific semantic parsers (teachers) using weak supervision in the absence of the ground truth programs, followed by training a single unified parser (student) from the domain specific policies obtained from these teachers. The resultant semantic parser is not only compact but also generalizes better, and generates more accurate programs. It further does not require the user to provide a domain label while querying. On the standard Overnight dataset (containing multiple domains), we demonstrate that the proposed model improves performance by 20% in terms of denotation accuracy in comparison to baseline techniques. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 134,918 |
2307.11729 | OUTFOX: LLM-Generated Essay Detection Through In-Context Learning with Adversarially Generated Examples | Large Language Models (LLMs) have achieved human-level fluency in text generation, making it difficult to distinguish between human-written and LLM-generated texts. This poses a growing risk of misuse of LLMs and demands the development of detectors to identify LLM-generated texts. However, existing detectors lack robustness against attacks: they degrade detection accuracy by simply paraphrasing LLM-generated texts. Furthermore, a malicious user might attempt to deliberately evade the detectors based on detection results, but this has not been assumed in previous studies. In this paper, we propose OUTFOX, a framework that improves the robustness of LLM-generated-text detectors by allowing both the detector and the attacker to consider each other's output. In this framework, the attacker uses the detector's prediction labels as examples for in-context learning and adversarially generates essays that are harder to detect, while the detector uses the adversarially generated essays as examples for in-context learning to learn to detect essays from a strong attacker. Experiments in the domain of student essays show that the proposed detector improves the detection performance on the attacker-generated texts by up to +41.3 points F1-score. Furthermore, the proposed detector shows a state-of-the-art detection performance: up to 96.9 points F1-score, beating existing detectors on non-attacked texts. Finally, the proposed attacker drastically degrades the performance of detectors by up to -57.0 points F1-score, massively outperforming the baseline paraphrasing method for evading detection. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 381,003 |
2109.07260 | Evaluation of Distributed Databases in Hybrid Clouds and Edge Computing: Energy, Bandwidth, and Storage Consumption | A benchmark study of modern distributed databases is an important source of information for selecting the right technology for managing data in cloud-edge paradigms. To make the right decision, it is necessary to conduct an extensive experimental study on a variety of hardware infrastructures. While most state-of-the-art studies have investigated only the response time and scalability of distributed databases, examining other metrics (e.g., energy, bandwidth, and storage consumption) is essential to fully understand the resource consumption of these databases. Also, existing studies have explored the response time and scalability of these databases in either a private or a public cloud. Hence, there is a paucity of investigation into the evaluation of these databases deployed in a hybrid cloud, which is the seamless integration of public and private clouds. To address these research gaps, in this paper we investigate the energy, bandwidth, and storage consumption of the most widely used distributed databases. For this purpose, we have evaluated four open-source databases (Cassandra, Mongo, Redis, and MySQL) on a hybrid cloud spanning local OpenStack and Microsoft Azure, and on a variety of edge computing nodes including a Raspberry Pi, a cluster of Raspberry Pis, and low- and high-power servers. Our extensive experimental results reveal several helpful insights for the deployment selection of modern distributed databases in edge-cloud environments. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 255,456 |
2203.14155 | How Do We Fail? Stress Testing Perception in Autonomous Vehicles | Autonomous vehicles (AVs) rely on environment perception and behavior prediction to reason about agents in their surroundings. These perception systems must be robust to adverse weather such as rain, fog, and snow. However, validation of these systems is challenging due to their complexity and dependence on observation histories. This paper presents a method for characterizing failures of LiDAR-based perception systems for AVs in adverse weather conditions. We develop a methodology based on reinforcement learning to find likely failures in object tracking and trajectory prediction due to sequences of disturbances. We apply disturbances using a physics-based data augmentation technique for simulating LiDAR point clouds in adverse weather conditions. Experiments performed across a wide range of driving scenarios from a real-world driving dataset show that our proposed approach finds high-likelihood failures with smaller input disturbances than baselines while remaining computationally tractable. Identified failures can inform the future development of robust perception systems for AVs. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 287,901 |
2405.07291 | Robust Beamforming with Gradient-based Liquid Neural Network | Millimeter-wave (mmWave) multiple-input multiple-output (MIMO) communication with the advanced beamforming technologies is a key enabler to meet the growing demands of future mobile communication. However, the dynamic nature of cellular channels in large-scale urban mmWave MIMO communication scenarios brings substantial challenges, particularly in terms of complexity and robustness. To address these issues, we propose a robust gradient-based liquid neural network (GLNN) framework that utilizes ordinary differential equation-based liquid neurons to solve the beamforming problem. Specifically, our proposed GLNN framework takes gradients of the optimization objective function as inputs to extract the high-order channel feature information, and then introduces a residual connection to mitigate the training burden. Furthermore, we use the manifold learning technique to compress the search space of the beamforming problem. These designs enable the GLNN to effectively maintain low complexity while ensuring strong robustness to noisy and highly dynamic channels. Extensive simulation results demonstrate that the GLNN can achieve 4.15% higher spectral efficiency than that of typical iterative algorithms, and reduce the time consumption to only 1.61% that of conventional methods. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 453,652 |
2202.03400 | Private Read Update Write (PRUW) with Storage Constrained Databases | We investigate the problem of private read update write (PRUW) in relation to federated submodel learning (FSL) with storage constrained databases. In PRUW, a user privately reads a submodel from a system of $N$ databases containing $M$ submodels, updates it locally, and writes the update back to the databases without revealing the submodel index or the value of the update. The databases considered in this problem are only allowed to store a given amount of information specified by an arbitrary storage constraint. We provide a storage mechanism that determines the contents of each database prior to the application of the PRUW scheme, such that the total communication cost is minimized. We show that the proposed storage scheme achieves a lower total cost compared to what is achieved by using \emph{coded storage} or \emph{divided storage} to meet the given storage constraint. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | true | 279,190 |
2111.04157 | Extractors: Low Entropy Requirements Colliding With Non-Malleability | The known constructions of negligible error (non-malleable) two-source extractors can be broadly classified in three categories: (1) Constructions where one source has min-entropy rate about $1/2$, the other source can have small min-entropy rate, but the extractor doesn't guarantee non-malleability. (2) Constructions where one source is uniform, and the other can have small min-entropy rate, and the extractor guarantees non-malleability when the uniform source is tampered. (3) Constructions where both sources have entropy rate very close to $1$ and the extractor guarantees non-malleability against the tampering of both sources. We introduce a new notion of collision resistant extractors and in using it we obtain a strong two source non-malleable extractor where we require the first source to have $0.8$ entropy rate and the other source can have min-entropy polylogarithmic in the length of the source. We show how the above extractor can be applied to obtain a non-malleable extractor with output rate $\frac 1 2$, which is optimal. We also show how, by using our extractor and extending the known protocol, one can obtain a privacy amplification secure against memory tampering where the size of the secret output is almost optimal. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 265,405 |
1705.00018 | Stochastic Block Model Reveals the Map of Citation Patterns and Their Evolution in Time | In this study we map out the large-scale structure of citation networks of science journals and follow their evolution in time by using stochastic block models (SBMs). The SBM fitting procedures are principled methods that can be used to find hierarchical groupings of journals into blocks that show similar incoming and outgoing citation patterns. These methods work directly on the citation network without the need to construct auxiliary networks based on similarity of nodes. We fit the SBMs to the networks of journals we have constructed from a data set of around 630 million citations and find a variety of different types of blocks, such as clusters, bridges, sources, and sinks. In addition, we use a recent generalization of SBMs to determine how much a manually curated classification of journals into subfields of science is related to the block structure of the journal network and how this relationship changes in time. The SBM method tries to find a network of blocks that is the best high-level representation of the network of journals, and we illustrate how these block networks (at various levels of resolution) can be used as maps of science. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 72,613 |
1908.02571 | Linking Physicians to Medical Research Results via Knowledge Graph Embeddings and Twitter | Informing professionals about the latest research results in their field is a particularly important task in the field of health care, since any development in this field directly improves the health status of the patients. Meanwhile, social media is an infrastructure that allows public instant sharing of information, thus it has recently become popular in medical applications. In this study, we apply Multi Distance Knowledge Graph Embeddings (MDE) to link physicians and surgeons to the latest medical breakthroughs that are shared as the research results on Twitter. Our study shows that using this method physicians can be informed about the new findings in their field given that they have an account dedicated to their profession. | false | false | false | true | true | true | true | false | false | false | false | false | false | false | false | false | false | false | 141,035 |
2004.03188 | Increasing the Inference and Learning Speed of Tsetlin Machines with Clause Indexing | The Tsetlin Machine (TM) is a machine learning algorithm founded on the classical Tsetlin Automaton (TA) and game theory. It further leverages frequent pattern mining and resource allocation principles to extract common patterns in the data, rather than relying on minimizing output error, which is prone to overfitting. Unlike the intertwined nature of pattern representation in neural networks, a TM decomposes problems into self-contained patterns, represented as conjunctive clauses. The clause outputs, in turn, are combined into a classification decision through summation and thresholding, akin to a logistic regression function, however, with binary weights and a unit step output function. In this paper, we exploit this hierarchical structure by introducing a novel algorithm that avoids evaluating the clauses exhaustively. Instead we use a simple look-up table that indexes the clauses on the features that falsify them. In this manner, we can quickly evaluate a large number of clauses through falsification, simply by iterating through the features and using the look-up table to eliminate those clauses that are falsified. The look-up table is further structured so that it facilitates constant time updating, thus supporting use also during learning. We report up to 15 times faster classification and three times faster learning on MNIST and Fashion-MNIST image classification, and IMDb sentiment analysis. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 171,481 |
2409.06445 | Learning Generative Interactive Environments By Trained Agent Exploration | World models are increasingly pivotal in interpreting and simulating the rules and actions of complex environments. Genie, a recent model, excels at learning from visually diverse environments but relies on costly human-collected data. We observe that their alternative method of using random agents is too limited to explore the environment. We propose to improve the model by employing reinforcement learning based agents for data generation. This approach produces diverse datasets that enhance the model's ability to adapt and perform well across various scenarios and realistic actions within the environment. In this paper, we first release the model GenieRedux - an implementation based on Genie. Additionally, we introduce GenieRedux-G, a variant that uses the agent's readily available actions to factor out action prediction uncertainty during validation. Our evaluation, including a replication of the Coinrun case study, shows that GenieRedux-G achieves superior visual fidelity and controllability using the trained agent exploration. The proposed approach is reproducible, scalable, and adaptable to new types of environments. Our codebase is available at https://github.com/insait-institute/GenieRedux . | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 487,129 |
2405.06289 | Look Once to Hear: Target Speech Hearing with Noisy Examples | In crowded settings, the human brain can focus on speech from a target speaker, given prior knowledge of how they sound. We introduce a novel intelligent hearable system that achieves this capability, enabling target speech hearing that ignores all interfering speech and noise except the target speaker. A naive approach is to require a clean speech example to enroll the target speaker. This, however, is not well aligned with the hearable application domain, since obtaining a clean example is challenging in real-world scenarios, creating a unique user interface problem. We present the first enrollment interface where the wearer looks at the target speaker for a few seconds to capture a single, short, highly noisy, binaural example of the target speaker. This noisy example is used for enrollment and subsequent speech extraction in the presence of interfering speakers and noise. Our system achieves a signal quality improvement of 7.01 dB using less than 5 seconds of noisy enrollment audio and can process 8 ms of audio chunks in 6.24 ms on an embedded CPU. Our user studies demonstrate generalization to real-world static and mobile speakers in previously unseen indoor and outdoor multipath environments. Finally, our enrollment interface for noisy examples does not cause performance degradation compared to clean examples, while being convenient and user-friendly. Taking a step back, this paper takes an important step towards enhancing human auditory perception with artificial intelligence. We provide code and data at: https://github.com/vb000/LookOnceToHear. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 453,245 |
1212.5352 | On the Adaptability of Neural Network Image Super-Resolution | In this paper, we describe and develop a framework for the Multilayer Perceptron (MLP) to work on low-level image processing, where the MLP is used to perform image super-resolution. The MLPs are trained with different types of images from various categories, allowing us to analyse the behaviour and performance of the neural network. The tests are carried out using quantitative measures: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), and the Structural Similarity Index (SSIM). The results showed that an MLP trained with a single image category can perform reasonably well compared to methods proposed by other researchers. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 20,543 |
2208.02205 | Large-scale Building Damage Assessment using a Novel Hierarchical Transformer Architecture on Satellite Images | This paper presents DAHiTrA, a novel deep-learning model with hierarchical transformers to classify building damage based on satellite images in the aftermath of natural disasters. Satellite imagery provides real-time and high-coverage information and offers opportunities to inform large-scale post-disaster building damage assessment, which is critical for rapid emergency response. In this work, a novel transformer-based network is proposed for assessing building damage. This network leverages hierarchical spatial features of multiple resolutions and captures the temporal differences in the feature domain after applying a transformer encoder on the spatial features. The proposed network achieves state-of-the-art performance when tested on a large-scale disaster damage dataset (xBD) for building localization and damage classification, as well as on the LEVIR-CD dataset for change detection tasks. In addition, this work introduces a new high-resolution satellite imagery dataset, Ida-BD (related to Hurricane Ida in Louisiana in 2021), for domain adaptation. Further, it demonstrates an approach for using this dataset by adapting the model with limited fine-tuning, and hence applying the model to newly damaged areas with scarce data. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 311,406 |
2012.08074 | An exact solution in Markov decision process with multiplicative rewards as a general framework | We develop an exactly solvable framework of Markov decision process with a finite horizon, and continuous state and action spaces. We first review the exact solution of conventional linear quadratic regulation with a linear transition and a Gaussian noise, whose optimal policy does not depend on the Gaussian noise, which is an undesired feature in the presence of significant noises. It motivates us to investigate exact solutions which depend on noise. To do so, we generalize the reward accumulation to be a general binary commutative and associative operation. By a new multiplicative accumulation, we obtain an exact solution of optimization assuming linear transitions with a Gaussian noise and the optimal policy is noise dependent in contrast to the additive accumulation. Furthermore, we also show that the multiplicative scheme is a general framework that covers the additive one with an arbitrary precision, which is a model-independent principle. | false | false | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | 211,648 |
2111.00781 | Decentralized Cooperative Reinforcement Learning with Hierarchical Information Structure | Multi-agent reinforcement learning (MARL) problems are challenging due to information asymmetry. To overcome this challenge, existing methods often require a high level of coordination or communication between the agents. We consider two-agent multi-armed bandits (MABs) and Markov decision processes (MDPs) with a hierarchical information structure arising in applications, which we exploit to propose simpler and more efficient algorithms that require no coordination or communication. In the structure, in each step the "leader" chooses her action first, and then the "follower" decides his action after observing the leader's action. The two agents observe the same reward (and the same state transition in the MDP setting) that depends on their joint action. For the bandit setting, we propose a hierarchical bandit algorithm that achieves a near-optimal gap-independent regret of $\widetilde{\mathcal{O}}(\sqrt{ABT})$ and a near-optimal gap-dependent regret of $\mathcal{O}(\log(T))$, where $A$ and $B$ are the numbers of actions of the leader and the follower, respectively, and $T$ is the number of steps. We further extend to the case of multiple followers and the case with a deep hierarchy, where we both obtain near-optimal regret bounds. For the MDP setting, we obtain $\widetilde{\mathcal{O}}(\sqrt{H^7S^2ABT})$ regret, where $H$ is the number of steps per episode, $S$ is the number of states, and $T$ is the number of episodes. This matches the existing lower bound in terms of $A$, $B$, and $T$. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | 264,340 |
2407.08583 | The Synergy between Data and Multi-Modal Large Language Models: A Survey from Co-Development Perspective | Recent years have witnessed the rapid development of large language models (LLMs). Building on powerful LLMs, multi-modal LLMs (MLLMs) extend the modality from text to a broader spectrum of domains, attracting widespread attention due to the broader range of application scenarios. As LLMs and MLLMs rely on vast amounts of model parameters and data to achieve emergent capabilities, the importance of data is receiving increasingly widespread attention and recognition. Tracing and analyzing recent data-oriented works for MLLMs, we find that the development of models and data is not two separate paths but rather interconnected. On the one hand, vaster and higher-quality data contribute to better performance of MLLMs; on the other hand, MLLMs can facilitate the development of data. The co-development of multi-modal data and MLLMs requires a clear view of 1) at which development stages of MLLMs specific data-centric approaches can be employed to enhance certain MLLM capabilities, and 2) how MLLMs, utilizing those capabilities, can contribute to multi-modal data in specific roles. To promote data-model co-development for the MLLM community, we systematically review existing works related to MLLMs from the data-model co-development perspective. A regularly maintained project associated with this survey is accessible at https://github.com/modelscope/data-juicer/blob/main/docs/awesome_llm_data.md. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 472,224 |
1407.0822 | Reducing Offline Evaluation Bias in Recommendation Systems | Recommendation systems have been integrated into the majority of large online systems. They tailor those systems to individual users by filtering and ranking information according to user profiles. This adaptation process influences the way users interact with the system and, as a consequence, increases the difficulty of evaluating a recommendation algorithm with historical data (via offline evaluation). This paper analyses this evaluation bias and proposes a simple item weighting solution that reduces its impact. The efficiency of the proposed solution is evaluated on real world data extracted from Viadeo professional social network. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 34,370 |
1510.03353 | Performance Analysis of Underlay Cognitive Radio Systems: Estimation-Throughput Tradeoff | In this letter, we study the performance of cognitive Underlay Systems (USs) that employ a power control mechanism at the Secondary Transmitter (ST). Existing baseline models considered for the performance analysis either assume knowledge of the involved channels at the ST or retrieve this information by means of a feedback channel; however, such situations hardly exist in practice. Motivated by this fact, we propose a novel approach that incorporates the estimation of the involved channels at the ST in order to characterize the performance of USs under realistic scenarios. Moreover, we apply an outage constraint that captures the impact of imperfect channel knowledge, particularly on the interference power received at the primary receiver. Besides this, we employ a transmit power constraint at the ST to determine an operating regime for the US. Finally, we analyze an interesting tradeoff between the estimation time and the secondary throughput allowing an optimized performance of the US. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 47,830 |
2309.00369 | Bayesian estimation and reconstruction of marine surface contaminant dispersion | Discharge of hazardous substances into the marine environment poses a substantial risk to both public health and the ecosystem. In such incidents, it is imperative to accurately estimate the release strength of the source and reconstruct the spatio-temporal dispersion of the substances based on the collected measurements. In this study, we propose an integrated estimation framework to tackle this challenge, which can be used in conjunction with a sensor network or a mobile sensor for environment monitoring. We employ the fundamental convection-diffusion partial differential equation (PDE) to represent the general dispersion of a physical quantity in a non-uniform flow field. The PDE model is spatially discretised into a linear state-space model using the dynamic transient finite-element method (FEM) so that the characterisation of time-varying dispersion can be cast into the problem of inferring the model states from sensor measurements. We also consider imperfect sensing phenomena, including miss-detection and signal quantisation, which are frequently encountered when using a sensor network. This complicated sensor process introduces nonlinearity into the Bayesian estimation process. A Rao-Blackwellised particle filter (RBPF) is designed to provide an effective solution by exploiting the linear structure of the state-space model, whereas the nonlinearity of the measurement model can be handled by Monte Carlo approximation with particles. The proposed framework is validated using a simulated oil spill incident in the Baltic sea with real ocean flow data. The results show the efficacy of the developed spatio-temporal dispersion model and estimation schemes in the presence of imperfect measurements. Moreover, the parameter selection process is discussed, along with some comparison studies to illustrate the advantages of the proposed algorithm over existing methods. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 389,288 |
1804.05990 | Learning Joint Semantic Parsers from Disjoint Data | We present a new approach to learning semantic parsers from multiple datasets, even when the target semantic formalisms are drastically different, and the underlying corpora do not overlap. We handle such "disjoint" data by treating annotations for unobserved formalisms as latent structured variables. Building on state-of-the-art baselines, we show improvements both in frame-semantic parsing and semantic dependency parsing by modeling them jointly. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 95,195 |
2404.10772 | Gaussian Opacity Fields: Efficient Adaptive Surface Reconstruction in Unbounded Scenes | Recently, 3D Gaussian Splatting (3DGS) has demonstrated impressive novel view synthesis results, while allowing the rendering of high-resolution images in real-time. However, leveraging 3D Gaussians for surface reconstruction poses significant challenges due to the explicit and disconnected nature of 3D Gaussians. In this work, we present Gaussian Opacity Fields (GOF), a novel approach for efficient, high-quality, and adaptive surface reconstruction in unbounded scenes. Our GOF is derived from ray-tracing-based volume rendering of 3D Gaussians, enabling direct geometry extraction from 3D Gaussians by identifying its level set, without resorting to Poisson reconstruction or TSDF fusion as in previous work. We approximate the surface normal of Gaussians as the normal of the ray-Gaussian intersection plane, enabling the application of regularization that significantly enhances geometry. Furthermore, we develop an efficient geometry extraction method utilizing Marching Tetrahedra, where the tetrahedral grids are induced from 3D Gaussians and thus adapt to the scene's complexity. Our evaluations reveal that GOF surpasses existing 3DGS-based methods in surface reconstruction and novel view synthesis. Further, it compares favorably to, or even outperforms, neural implicit methods in both quality and speed. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 447,246 |
2209.11223 | UniColor: A Unified Framework for Multi-Modal Colorization with Transformer | We propose the first unified framework UniColor to support colorization in multiple modalities, including both unconditional and conditional ones, such as stroke, exemplar, text, and even a mix of them. Rather than learning a separate model for each type of condition, we introduce a two-stage colorization framework for incorporating various conditions into a single model. In the first stage, multi-modal conditions are converted into a common representation of hint points. Particularly, we propose a novel CLIP-based method to convert the text to hint points. In the second stage, we propose a Transformer-based network composed of Chroma-VQGAN and Hybrid-Transformer to generate diverse and high-quality colorization results conditioned on hint points. Both qualitative and quantitative comparisons demonstrate that our method outperforms state-of-the-art methods in every control modality and further enables multi-modal colorization that was not feasible before. Moreover, we design an interactive interface showing the effectiveness of our unified framework in practical usage, including automatic colorization, hybrid-control colorization, local recolorization, and iterative color editing. Our code and models are available at https://luckyhzt.github.io/unicolor. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 319,115 |
1905.11590 | A Review of Semi Supervised Learning Theories and Recent Advances | Semi-supervised learning, which emerged at the beginning of this century, is a new type of learning method between traditional supervised learning and unsupervised learning. The main idea of semi-supervised learning is to introduce unlabeled samples into the model training process to avoid performance (or model) degeneration due to insufficiency of labeled samples. Semi-supervised learning has been applied successfully in many fields. This paper reviews the development process and main theories of semi-supervised learning, as well as its recent advances and importance in solving real-world problems demonstrated by typical application examples. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 132,475 |
2412.09564 | Improving the Reliability of Cable Broadband Networks via Proactive Network Maintenance | Cable broadband networks are one of the few "last-mile" broadband technologies widely available in the U.S. Unfortunately, they have poor reliability after decades of deployment. The cable industry proposed a framework called Proactive Network Maintenance (PNM) to diagnose the cable networks. However, there is little public knowledge or systematic study on how to use these data to detect and localize cable network problems. Existing tools in the public domain have prohibitively high false-positive rates. In this paper, we propose CableMon, the first public-domain system that applies machine learning techniques to PNM data to improve the reliability of cable broadband networks. CableMon tackles two key challenges faced by cable ISPs: accurately detecting failures, and distinguishing whether a failure occurs within a network or at a subscriber's premise. CableMon uses statistical models to generate features from time series data and uses customer trouble tickets as hints to infer abnormal/failure thresholds for these generated features. Further, CableMon employs an unsupervised learning model to group cable devices sharing similar anomalous patterns and effectively identify impairments that occur inside a cable network and impairments that occur at a subscriber's premise, as these two different faults require different types of technical personnel to repair them. We use eight months of PNM data and customer trouble tickets from an ISP and experimental deployment to evaluate CableMon's performance. Our evaluation results show that CableMon can effectively detect and distinguish failures from PNM data and outperforms existing public-domain tools. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 516,529 |
2412.04665 | ProPLIKS: Probabilistic 3D human body pose estimation | We present a novel approach for 3D human pose estimation by employing probabilistic modeling. This approach leverages the advantages of normalizing flows in non-Euclidean geometries to address uncertain poses. Specifically, our method employs normalizing flow tailored to the SO(3) rotational group, incorporating a coupling mechanism based on the M\"obius transformation. This enables the framework to accurately represent any distribution on SO(3), effectively addressing issues related to discontinuities. Additionally, we reinterpret the challenge of reconstructing 3D human figures from 2D pixel-aligned inputs as the task of mapping these inputs to a range of probable poses. This perspective acknowledges the intrinsic ambiguity of the task and facilitates a straightforward integration method for multi-view scenarios. The combination of these strategies showcases the effectiveness of probabilistic models in complex scenarios for human pose estimation techniques. Our approach notably surpasses existing methods in the field of pose estimation. We also validate our methodology on human pose estimation from RGB images as well as medical X-Ray datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 514,507 |
1910.13466 | Ordered Memory | Stack-augmented recurrent neural networks (RNNs) have been of interest to the deep learning community for some time. However, the difficulty of training memory models remains a problem obstructing the widespread use of such models. In this paper, we propose the Ordered Memory architecture. Inspired by Ordered Neurons (Shen et al., 2018), we introduce a new attention-based mechanism and use its cumulative probability to control the writing and erasing operation of the memory. We also introduce a new Gated Recursive Cell to compose lower-level representations into higher-level representation. We demonstrate that our model achieves strong performance on the logical inference task (Bowman et al., 2015) and the ListOps (Nangia and Bowman, 2018) task. We can also interpret the model to retrieve the induced tree structure, and find that these induced structures align with the ground truth. Finally, we evaluate our model on the Stanford Sentiment Treebank tasks (Socher et al., 2013), and find that it performs comparably to the state-of-the-art methods in the literature. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 151,397 |
2310.19590 | Operator Learning Enhanced Physics-informed Neural Networks for Solving Partial Differential Equations Characterized by Sharp Solutions | Physics-informed Neural Networks (PINNs) have been shown as a promising approach for solving both forward and inverse problems of partial differential equations (PDEs). Meanwhile, the neural operator approach, including methods such as Deep Operator Network (DeepONet) and Fourier neural operator (FNO), has been introduced and extensively employed in approximating solutions of PDEs. Nevertheless, solving problems with sharp solutions poses a significant challenge when employing these two approaches. To address this issue, we propose in this work a novel framework termed Operator Learning Enhanced Physics-informed Neural Networks (OL-PINN). Initially, we utilize DeepONet to learn the solution operator for a set of smooth problems relevant to the PDEs characterized by sharp solutions. Subsequently, we integrate the pre-trained DeepONet with PINN to resolve the target sharp solution problem. We showcase the efficacy of OL-PINN by successfully addressing various problems, such as the nonlinear diffusion-reaction equation, the Burgers equation and the incompressible Navier-Stokes equation at high Reynolds number. Compared with the vanilla PINN, the proposed method requires only a small number of residual points to achieve a strong generalization capability. Moreover, it substantially enhances accuracy, while also ensuring a robust training process. Furthermore, OL-PINN inherits the advantage of PINN for solving inverse problems. To this end, we apply the OL-PINN approach for solving problems with only partial boundary conditions, which usually cannot be solved by the classical numerical methods, showing its capacity in solving ill-posed problems and consequently more complex inverse problems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 404,050 |
2409.09354 | PeriGuru: A Peripheral Robotic Mobile App Operation Assistant based on GUI Image Understanding and Prompting with LLM | Smartphones have significantly enhanced our daily learning, communication, and entertainment, becoming an essential component of modern life. However, certain populations, including the elderly and individuals with disabilities, encounter challenges in utilizing smartphones, thus necessitating mobile app operation assistants, a.k.a. mobile app agent. With considerations for privacy, permissions, and cross-platform compatibility issues, we endeavor to devise and develop PeriGuru in this work, a peripheral robotic mobile app operation assistant based on GUI image understanding and prompting with Large Language Model (LLM). PeriGuru leverages a suite of computer vision techniques to analyze GUI screenshot images and employs LLM to inform action decisions, which are then executed by robotic arms. PeriGuru achieves a success rate of 81.94% on the test task set, which surpasses by more than double the method without PeriGuru's GUI image interpreting and prompting design. Our code is available on https://github.com/Z2sJ4t/PeriGuru. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 488,286 |
2007.04796 | Training of Deep Learning Neuro-Skin Neural Network | In this brief paper, a learning algorithm is developed for the Deep Learning Neuro-Skin Neural Network to improve its learning properties. The neuroskin is a new type of neural network presented recently by the authors. It consists of a cellular membrane which has a neuron attached to each cell. The neuron is the cell's nucleus. A neuroskin is modelled using finite elements. Each element of the finite element model represents a cell. Each cell's neuron has dendritic fibers which connect it to the nodes of the cell. On the other hand, its axon is connected to the nodes of a number of different neurons. The neuroskin is trained to contract upon receiving an input. The learning takes place during updating iterations using sensitivity analysis. It is shown that while the neuroskin cannot initially present the desirable response, it improves gradually to the desired level. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 186,478 |
2112.05536 | Rapid manufacturing of color-based hemispherical soft tactile fingertips | Tactile sensing can provide access to information about the contact (i.e. slippage, surface feature, friction), which is out of reach of vision but crucial for manipulation. To access this information, a dense measurement of the deformation of soft fingertips is necessary. Recently, tactile sensors that rely on a camera looking at a deformable membrane have demonstrated that a dense measurement of the contact is possible. However, their manufacturing can be time-consuming and labor-intensive. Here, we show a new design method that uses multi-color additive manufacturing and silicone casting to efficiently manufacture soft marker-based tactile sensors that are able to capture with high-resolution the three-dimensional deformation field at the interface. Each marker is composed of two superimposed color filters. The subtractive color mixing encodes the normal deformation of the membrane, and the lateral deformation is found by centroid detection. With this manufacturing method, we can reach a density of 400 markers on a 21 mm radius hemisphere, allowing for regular and dense measurement of the deformation. We calibrated and validated the approach by finding the curvature of objects with a threefold increase in accuracy as compared to previous implementations. The results demonstrate a simple yet effective approach to manufacturing artificial fingertips for capturing a rich image of the tactile interaction at the location of contact. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 270,871 |
2311.11483 | A Multi-Center Study on the Adaptability of a Shared Foundation Model for Electronic Health Records | Foundation models hold promise for transforming AI in healthcare by providing modular components that are easily adaptable to downstream healthcare tasks, making AI development more scalable and cost-effective. Structured EHR foundation models, trained on coded medical records from millions of patients, demonstrated benefits including increased performance with fewer training labels, and improved robustness to distribution shifts. However, questions remain on the feasibility of sharing these models across different hospitals and their performance for local task adaptation. This multi-center study examined the adaptability of a recently released structured EHR foundation model ($FM_{SM}$), trained on longitudinal medical record data from 2.57M Stanford Medicine patients. Experiments were conducted using EHR data at The Hospital for Sick Children and MIMIC-IV. We assessed both adaptability via continued pretraining on local data, and task adaptability compared to baselines of training models from scratch at each site, including a local foundation model. We evaluated the performance of these models on 8 clinical prediction tasks. In both datasets, adapting the off-the-shelf $FM_{SM}$ matched the performance of GBM models locally trained on all data while providing a 13% improvement in settings with few task-specific training labels. With continued pretraining on local data, label efficiency substantially improved, such that $FM_{SM}$ required fewer than 1% of training examples to match the fully trained GBM's performance. Continued pretraining was also 60 to 90% more sample-efficient than training local foundation models from scratch. Our findings show that adapting shared EHR foundation models across hospitals provides improved prediction performance at less cost, underscoring the utility of base foundation models as modular components to streamline the development of healthcare AI. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 408,965 |
1909.08549 | Knowledge representation and diagnostic inference using Bayesian networks in the medical discourse | For diagnostic inference under uncertainty, Bayesian networks are investigated. The method is based on an adequate uniform representation of the necessary knowledge. This includes both generic and experience-based specific knowledge, which is stored in a knowledge base. For knowledge processing, a combination of the problem-solving methods of concept-based and case-based reasoning is used. Concept-based reasoning is used for the diagnosis, therapy and medication recommendation and evaluation of generic knowledge. Exceptions in the form of specific patient cases are processed by case-based reasoning. In addition, the use of Bayesian networks allows us to deal with uncertainty, fuzziness and incompleteness. Thus, the valid general concepts can be issued according to their probability. To this end, various inference mechanisms are introduced and subsequently evaluated within the context of a developed prototype. Tests are employed to assess the classification of diagnoses by the network. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 146,004 |
2406.01375 | D-CPT Law: Domain-specific Continual Pre-Training Scaling Law for Large Language Models | Continual Pre-Training (CPT) on Large Language Models (LLMs) has been widely used to expand the model's fundamental understanding of specific downstream domains (e.g., math and code). For the CPT on domain-specific LLMs, one important question is how to choose the optimal mixture ratio between the general-corpus (e.g., Dolma, Slim-pajama) and the downstream domain-corpus. Existing methods usually adopt laborious human efforts by grid-searching on a set of mixture ratios, which require high GPU training consumption costs. Besides, we cannot guarantee the selected ratio is optimal for the specific domain. To address the limitations of existing methods, inspired by the Scaling Law for performance prediction, we propose to investigate the Scaling Law of the Domain-specific Continual Pre-Training (D-CPT Law) to decide the optimal mixture ratio with acceptable training costs for LLMs of different sizes. Specifically, by fitting the D-CPT Law, we can easily predict the general and downstream performance of arbitrary mixture ratios, model sizes, and dataset sizes using small-scale training costs on limited experiments. Moreover, we also extend our standard D-CPT Law on cross-domain settings and propose the Cross-Domain D-CPT Law to predict the D-CPT law of target domains, where very small training costs (about 1% of the normal training costs) are needed for the target domains. Comprehensive experimental results on six downstream domains demonstrate the effectiveness and generalizability of our proposed D-CPT Law and Cross-Domain D-CPT Law. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 460,287 |
2210.00166 | Automated segmentation of microvessels in intravascular OCT images using deep learning | To analyze this characteristic of vulnerability, we developed an automated deep learning method for detecting microvessels in intravascular optical coherence tomography (IVOCT) images. A total of 8,403 IVOCT image frames from 85 lesions and 37 normal segments were analyzed. Manual annotation was done using a dedicated software (OCTOPUS) previously developed by our group. Data augmentation in the polar (r,{\theta}) domain was applied to raw IVOCT images to ensure that microvessels appear at all possible angles. Pre-processing methods included guidewire/shadow detection, lumen segmentation, pixel shifting, and noise reduction. DeepLab v3+ was used to segment microvessel candidates. A bounding box on each candidate was classified as either microvessel or non-microvessel using a shallow convolutional neural network. For better classification, we used data augmentation (i.e., angle rotation) on bounding boxes with a microvessel during network training. Data augmentation and pre-processing steps improved microvessel segmentation performance significantly, yielding a method with Dice of 0.71+/-0.10 and pixel-wise sensitivity/specificity of 87.7+/-6.6%/99.8+/-0.1%. The network for classifying microvessels from candidates performed exceptionally well, with sensitivity of 99.5+/-0.3%, specificity of 98.8+/-1.0%, and accuracy of 99.1+/-0.5%. The classification step eliminated the majority of residual false positives, and the Dice coefficient increased from 0.71 to 0.73. In addition, our method produced 698 image frames with microvessels present, compared to 730 from manual analysis, representing a 4.4% difference. When compared to the manual method, the automated method improved microvessel continuity, implying improved segmentation performance. The method will be useful for research purposes as well as potential future treatment planning. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 320,760 |
2401.15668 | Lips Are Lying: Spotting the Temporal Inconsistency between Audio and Visual in Lip-Syncing DeepFakes | In recent years, DeepFake technology has achieved unprecedented success in high-quality video synthesis, but these methods also pose potential and severe security threats to humanity. DeepFake can be bifurcated into entertainment applications like face swapping and illicit uses such as lip-syncing fraud. However, lip-forgery videos, which neither change identity nor have discernible visual artifacts, present a formidable challenge to existing DeepFake detection methods. Our preliminary experiments have shown that the effectiveness of the existing methods often drastically decreases, or they even fail entirely, when tackling lip-syncing videos. In this paper, for the first time, we propose a novel approach dedicated to lip-forgery identification that exploits the inconsistency between lip movements and audio signals. We also mimic human natural cognition by capturing subtle biological links between lips and head regions to boost accuracy. To better illustrate the effectiveness and advances of our proposed method, we create a high-quality LipSync dataset, AVLips, by employing the state-of-the-art lip generators. We hope this high-quality and diverse dataset could well serve further research on this challenging and interesting field. Experimental results show that our approach gives an average accuracy of more than 95.3% in spotting lip-syncing videos, significantly outperforming the baselines. Extensive experiments demonstrate the capability to tackle deepfakes and the robustness in surviving diverse input transformations. Our method achieves an accuracy of up to 90.2% in real-world scenarios (e.g., WeChat video call) and shows its powerful capabilities in real scenario deployment. To facilitate the progress of this research community, we release all resources at https://github.com/AaronComo/LipFD. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 424,541 |
2111.13981 | Kilometer-scale autonomous navigation in subarctic forests: challenges and lessons learned | Challenges inherent to autonomous wintertime navigation in forests include the lack of a reliable Global Navigation Satellite System (GNSS) signal, low feature contrast, high illumination variations and a changing environment. This type of off-road environment is an extreme case of situations autonomous cars could encounter in northern regions. Thus, it is important to understand the impact of this harsh environment on autonomous navigation systems. To this end, we present a field report analyzing teach-and-repeat navigation in a subarctic forest while subject to fluctuating weather, including light and heavy snow, rain and drizzle. First, we describe the system, which relies on point cloud registration to localize a mobile robot through a boreal forest, while simultaneously building a map. We experimentally evaluate this system in over 18.8 km of autonomous navigation in the teach-and-repeat mode. Over 14 repeat runs, only four manual interventions were required, three of which were due to localization failure and another one caused by battery power outage. We show that dense vegetation perturbs the GNSS signal, rendering it unsuitable for navigation in forest trails. Furthermore, we highlight the increased uncertainty related to localizing using point cloud registration in forest trails. We demonstrate that it is not snow precipitation, but snow accumulation, that affects our system's ability to localize within the environment. Finally, we expose some challenges and lessons learned from our field campaign to support better experimental work in winter conditions. Our dataset is available online. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 268,451 |
1909.04724 | CalBehav: A Machine Learning based Personalized Calendar Behavioral Model using Time-Series Smartphone Data | The electronic calendar is a valuable resource nowadays for managing our daily life appointments or schedules, also known as events, ranging from professional to highly personal. Researchers have studied various types of calendar events to predict smartphone user behavior for incoming mobile communications. However, these studies typically do not take into account behavioral variations between individuals. In the real world, smartphone users can differ widely from each other in how they respond to incoming communications during their scheduled events. Moreover, an individual user may respond to the incoming communications differently in different contexts, subject to what type of event is scheduled in her personal calendar. Thus, a static calendar-based behavioral model for individual smartphone users does not necessarily reflect their behavior to the incoming communications. In this paper, we present a machine learning based context-aware model that is personalized and dynamically identifies an individual's dominant behavior for their scheduled events using logged time-series smartphone data, named ``CalBehav'' for short. The experimental results based on real datasets from calendar and phone logs show that this data-driven personalized model is more effective for intelligently managing the incoming mobile communications compared to existing calendar-based approaches. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 144,875 |
2311.07217 | Troubles and Failures in Interactional Language. Towards a Linguistically Informed Taxonomy | The goal of this talk is to introduce a systematic research agenda which aims to understand the nature of interaction between humans and artificial conversational agents (CAs) (henceforth human-machine interaction, HMI). Specifically, we shall take an explicit linguistic perspective focusing on linguistically defined variables that are known to influence the flow of conversations among humans (henceforth human-human interaction, HHI). | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 407,244 |
2104.03313 | SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks | We present SCANimate, an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar. These avatars are driven by pose parameters and have realistic clothing that moves and deforms naturally. SCANimate does not rely on a customized mesh template or surface mesh registration. We observe that fitting a parametric 3D body model, like SMPL, to a clothed human scan is tractable while surface registration of the body topology to the scan is often not, because clothing can deviate significantly from the body shape. We also observe that articulated transformations are invertible, resulting in geometric cycle consistency in the posed and unposed shapes. These observations lead us to a weakly supervised learning method that aligns scans into a canonical pose by disentangling articulated deformations without template-based surface registration. Furthermore, to complete missing regions in the aligned scans while modeling pose-dependent deformations, we introduce a locally pose-aware implicit function that learns to complete and model geometry with learned pose correctives. In contrast to commonly used global pose embeddings, our local pose conditioning significantly reduces long-range spurious correlations and improves generalization to unseen poses, especially when training data is limited. Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar. We demonstrate our approach on various clothing types with different amounts of training data, outperforming existing solutions and other variants in terms of fidelity and generality in every setting. The code is available at https://scanimate.is.tue.mpg.de. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 229,030 |
2303.05267 | Quantum memory error correction computation based on Chamon model | Quantum error correction codes play a central role in the realisation of fault-tolerant quantum computing. The Chamon model is a 3D generalization of the toric code. The error correction computation on this model has not been explored so far. In this work, the Chamon model is turned into a non-CSS error correction code. Logical qubits are built by the construction of logical Pauli operators. The property of logical operators reveals the expressions of code distance. According to the topological properties of Chamon models, an error elimination algorithm is proposed. Based on the error elimination algorithm, we propose a global randomized error correction algorithm to decode Chamon models in every single-qubit depolarized channel. This decoding algorithm is improved by adding the pretreatment process, termed the probabilistic greedy local algorithm, which adapts to different kinds of high-dimensional models. The estimated threshold error rate for the numerical experiment can be raised to $4.92\%$. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 350,404 |
2410.17143 | Empowering the Grid: Decentralized Autonomous Control for Effective Utilization and Resilience | With the emergence of low-inertia microgrids powered by inverter-based generation, there remains a concern about the operational resilience of these systems. Grid-forming inverters (GFMs), enabled by various device-level (primary) and system-level (secondary) control methods, are poised to play a significant role in achieving certain operational objectives, such as the effective utilization of clean energy resources while maintaining stability. However, despite the recent advances in GFMs, there is a lack of suitable controls that can ascertain resilience-constrained operations, like maintaining critical operational safety limits during transients under various cyber-physical disruptions. In this work, we develop decentralized autonomous controllers (DACs) that enforce resilience-constrained operation via local, minimally invasive adjustments (e.g., changes in set-points) while co-existing within the hierarchy of existing (primary and secondary) controls. The DACs work autonomously by sensing only local GFM measurements and act only when operational resilience constraints are violated. The proposed DAC scheme is computationally efficient (only algebraic computations), which enables fast, real-time execution and demonstrates the efficacy of the proposed control framework on GridLAB-D-HELICS-based control-grid co-simulations on the IEEE 123-node networked microgrid. Finally, we show how the developed DACs empower the grid by utilizing the available resources entirely to ensure resilience (maintain frequency safe limits). | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 501,328 |
2102.02413 | On Single-User Interactive Beam Alignment in Millimeter Wave Systems: Impact of Feedback Delay | Narrow beams are key to wireless communications in millimeter wave frequency bands. Beam alignment (BA) allows the base station (BS) to adjust the direction and width of the beam used for communication. During BA, the BS transmits a number of scanning beams covering different angular regions. The goal is to minimize the expected width of the uncertainty region (UR) that includes the angle of departure of the user. Conventionally, in interactive BA, it is assumed that the feedback corresponding to each scanning packet is received prior to transmission of the next one. However, in practice, the feedback delay could be larger because of propagation or system constraints. This paper investigates BA strategies that operate under arbitrary fixed feedback delays. This problem is analyzed through a source coding perspective where the feedback sequences are viewed as source codewords. It is shown that these codewords form a codebook with a particular characteristic which is used to define a new class of codes called d-unimodal codes. By analyzing the properties of these codes, a lower bound on the minimum achievable expected beamwidth is provided. The results reveal potential performance improvements in terms of the BA duration it takes to achieve a fixed expected width of the UR over the state-of-the-art BA methods which do not consider the effect of delay. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 218,400 |
1401.2113 | Latent Sentiment Detection in Online Social Networks: A Communications-oriented View | In this paper, we consider the problem of latent sentiment detection in Online Social Networks such as Twitter. We demonstrate the benefits of using the underlying social network as an Ising prior to perform network aided sentiment detection. We show that the use of the underlying network results in substantially lower detection error rates compared to strictly features-based detection. In doing so, we introduce a novel communications-oriented framework for characterizing the probability of error, based on information-theoretic analysis. We study the variation of the calculated error exponent for several stylized network topologies such as the complete network, the star network and the closed-chain network, and show the importance of the network structure in determining detection performance. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 29,715
2402.18703 | Zero-error communication under discrete-time Markovian dynamics | Consider an open quantum system with (discrete-time) Markovian dynamics. Our task is to store information in the system in such a way that it can be retrieved perfectly, even after the system is left to evolve for an arbitrarily long time. We show that this is impossible for classical (resp. quantum) information precisely when the dynamics is mixing (resp. asymptotically entanglement breaking). Furthermore, we provide tight universal upper bounds on the minimum time after which any such dynamics `scrambles' the encoded information beyond the point of perfect retrieval. On the other hand, for dynamics that are not of this kind, we show that information must be encoded inside the peripheral space associated with the dynamics in order for it to be perfectly recoverable at any time in the future. This allows us to derive explicit formulas for the maximum amount of information that can be protected from noise in terms of the structure of the peripheral space of the dynamics. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 433,525 |
2302.03323 | Towards Efficient Trajectory Generation for Ground Robots beyond 2D Environment | With the development of robotics, ground robots are no longer limited to planar motion. Passive height variation due to complex terrain and active height control provided by special structures on robots require a more general navigation planning framework beyond 2D. Existing methods rarely consider both simultaneously, limiting the capabilities and applications of ground robots. In this paper, we propose an optimization-based planning framework for ground robots considering both active and passive height changes on the z-axis. The proposed planner first constructs a penalty field for chassis motion constraints defined in R3 such that the optimal solution space of the trajectory is continuous, resulting in a high-quality smooth chassis trajectory. Also, by constructing custom constraints in the z-axis direction, it is possible to plan trajectories for different types of ground robots which have a z-axis degree of freedom. We performed simulations and real-world experiments to verify the efficiency and trajectory quality of our algorithm. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 344,302
2212.00059 | Single Slice Thigh CT Muscle Group Segmentation with Domain Adaptation and Self-Training | Objective: Thigh muscle group segmentation is important for assessment of muscle anatomy, metabolic disease and aging. Many efforts have been put into quantifying muscle tissues with magnetic resonance (MR) imaging, including manual annotation of individual muscles. However, leveraging publicly available annotations in MR images to achieve muscle group segmentation on single slice computed tomography (CT) thigh images is challenging. Method: We propose an unsupervised domain adaptation pipeline with self-training to transfer labels from 3D MR to a single CT slice. First, we transform the image appearance from MR to CT with CycleGAN and feed the synthesized CT images to a segmenter simultaneously. Single CT slices are divided into hard and easy cohorts based on the entropy of pseudo labels inferred by the segmenter. After refining easy cohort pseudo labels based on anatomical assumptions, self-training with easy and hard splits is applied to fine-tune the segmenter. Results: On 152 withheld single CT thigh images, the proposed pipeline achieved a mean Dice of 0.888 (0.041) across all muscle groups, including the sartorius, hamstrings, quadriceps femoris and gracilis muscles. Conclusion: To the best of our knowledge, this is the first pipeline to achieve thigh imaging domain adaptation from MR to CT. The proposed pipeline is effective and robust in extracting muscle groups on 2D single slice CT thigh images. The container is available for public use at https://github.com/MASILab/DA_CT_muscle_seg | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 333,935
1810.00204 | Stochastic 2-D Motion Planning with a POMDP Framework | Motion planning is challenging in the case of imperfect state information. Decisions should be made based on the belief state, which evolves according to the noise from the system dynamics and sensor measurements. In this paper, we propose the QV-Tree Search algorithm, which combines state-of-the-art offline and online approximation methods for POMDPs. Instead of full node expansions in the tree search, only probable future observations are considered through forward sampling. This modification helps reduce online computation time and allows for GPU acceleration. We show using representative examples that the proposed QV-Tree Search is able to actively localize the robot in order to reach the goal location with high probability. The results of the proposed method are also compared with the A* and MDP algorithms, neither of which handles state uncertainty directly. The comparison shows that QV-Tree Search is able to drive the robot to the goal with higher success rate and fewer steps. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 109,119
1803.00233 | Scalable Dense Non-rigid Structure-from-Motion: A Grassmannian Perspective | This paper addresses the task of dense non-rigid structure-from-motion (NRSfM) using multiple images. State-of-the-art methods to this problem are often hurdled by scalability, expensive computations, and noisy measurements. Further, recent methods to NRSfM usually either assume a small number of sparse feature points or ignore local non-linearities of shape deformations, and thus cannot reliably model complex non-rigid deformations. To address these issues, in this paper, we propose a new approach for dense NRSfM by modeling the problem on a Grassmann manifold. Specifically, we assume the complex non-rigid deformations lie on a union of local linear subspaces both spatially and temporally. This naturally allows for a compact representation of the complex non-rigid deformation over frames. We provide experimental results on several synthetic and real benchmark datasets. The procured results clearly demonstrate that our method, apart from being scalable and more accurate than state-of-the-art methods, is also more robust to noise and generalizes to highly non-linear deformations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 91,634
2307.13466 | Integrating processed-based models and machine learning for crop yield prediction | Crop yield prediction typically involves the utilization of either theory-driven process-based crop growth models, which have proven to be difficult to calibrate for local conditions, or data-driven machine learning methods, which are known to require large datasets. In this work we investigate potato yield prediction using a hybrid meta-modeling approach. A crop growth model is employed to generate synthetic data for (pre)training a convolutional neural net, which is then fine-tuned with observational data. When applied in silico, our meta-modeling approach yields better predictions than a baseline comprising a purely data-driven approach. When tested on real-world data from field trials (n=303) and commercial fields (n=77), the meta-modeling approach yields competitive results with respect to the crop growth model. In the latter set, however, both models perform worse than a simple linear regression with a hand-picked feature set and dedicated preprocessing designed by domain experts. Our findings indicate the potential of meta-modeling for accurate crop yield prediction; however, further advancements and validation using extensive real-world datasets are recommended to solidify its practical effectiveness. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 381,595
2303.14453 | Federated Learning without Full Labels: A Survey | Data privacy has become an increasingly important concern in real-world big data applications such as machine learning. To address the problem, federated learning (FL) has been a promising solution to building effective machine learning models from decentralized and private data. Existing federated learning algorithms mainly tackle the supervised learning problem, where data are assumed to be fully labeled. However, in practice, fully labeled data is often hard to obtain, as the participants may not have sufficient domain expertise, or they lack the motivation and tools to label data. Therefore, the problem of federated learning without full labels is important in real-world FL applications. In this paper, we discuss how the problem can be solved with machine learning techniques that leverage unlabeled data. We present a survey of methods that combine FL with semi-supervised learning, self-supervised learning, and transfer learning methods. We also summarize the datasets used to evaluate FL methods without full labels. Finally, we highlight future directions in the context of FL without full labels. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 354,100 |
1608.03374 | Automatic text extraction and character segmentation using maximally stable extremal regions | Text detection and segmentation is an important prerequisite for many content based image analysis tasks. The paper proposes a novel text extraction and character segmentation algorithm using Maximally Stable Extremal Regions as basic letter candidates. These regions are then subjected to thresholding, and thereafter various connected components are determined to identify separate characters. The algorithm is tested on a set of various JPEG, PNG and BMP images over four different character sets: English, Russian, Hindi and Urdu. The algorithm gives good results for the English and Russian character sets; however, character segmentation in Urdu and Hindi is less accurate. The algorithm is simple, efficient, involves no training overhead, and gives good results even for low quality images. The paper also discusses various challenges in text extraction and segmentation for multilingual inputs. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 59,668
2112.01360 | Probabilistic Approach for Road-Users Detection | Object detection in autonomous driving applications involves the detection and tracking of semantic objects that are common in urban driving environments, such as pedestrians and vehicles. One of the major challenges in state-of-the-art deep-learning based object detection is false positives which occur with overconfident scores. This is highly undesirable in autonomous driving and other critical robotic-perception domains because of safety concerns. This paper proposes an approach to alleviate the problem of overconfident predictions by introducing a novel probabilistic layer to deep object detection networks in testing. The suggested approach avoids the traditional Sigmoid or Softmax prediction layer, which often produces overconfident predictions. It is demonstrated that the proposed technique reduces overconfidence in the false positives without degrading the performance on the true positives. The approach is validated on 2D KITTI object detection through YOLOV4 and SECOND (a Lidar-based detector). The proposed approach enables interpretable probabilistic predictions without requiring re-training of the network and is therefore very practical. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 269,457
2401.15910 | Correction to "Private Information Retrieval Over Gaussian MAC" | In the above article \cite{shmuel2021private}, the authors introduced a PIR scheme for the Additive White Gaussian Noise (AWGN) Multiple Access Channel (MAC), both with and without fading. The authors utilized the additive nature of the channel and leveraged the linear properties and structure of lattice codes to retrieve the desired message without the servers acquiring any knowledge about the retrieved message's index. Theorems 3 and 4 in \cite{shmuel2021private} contain an error arising from the incorrect usage of the modulo operator. Moreover, the proofs assume a one-to-one mapping function, $\phi(\cdot)$, between a message $W_j\in\mathbb{F}_p^L$ and the elements of $\cC$, mistakenly suggesting that the user possesses all the required information in advance. To deal with that, we define $\phi(\cdot)$ as a one-to-one mapping function between a vector of $l$ information bits and a lattice point $\lambda\in\cC$. Herein, we present the corrected versions of these theorems. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 424,642
2410.23962 | Image Synthesis with Class-Aware Semantic Diffusion Models for Surgical Scene Segmentation | Surgical scene segmentation is essential for enhancing surgical precision, yet it is frequently compromised by the scarcity and imbalance of available data. To address these challenges, semantic image synthesis methods based on generative adversarial networks and diffusion models have been developed. However, these models often yield non-diverse images and fail to capture small, critical tissue classes, limiting their effectiveness. In response, we propose the Class-Aware Semantic Diffusion Model (CASDM), a novel approach which utilizes segmentation maps as conditions for image synthesis to tackle data scarcity and imbalance. Novel class-aware mean squared error and class-aware self-perceptual loss functions have been defined to prioritize critical, less visible classes, thereby enhancing image quality and relevance. Furthermore, to our knowledge, we are the first to generate multi-class segmentation maps using text prompts in a novel fashion to specify their contents. These maps are then used by CASDM to generate surgical scene images, enhancing datasets for training and validating segmentation models. Our evaluation, which assesses both image quality and downstream segmentation performance, demonstrates the strong effectiveness and generalisability of CASDM in producing realistic image-map pairs, significantly advancing surgical scene segmentation across diverse and challenging datasets. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 504,268
2202.06498 | Task-Adaptive Feature Transformer with Semantic Enrichment for Few-Shot Segmentation | Few-shot learning allows machines to classify novel classes using only a few labeled samples. Recently, few-shot segmentation aiming at semantic segmentation on low sample data has also seen great interest. In this paper, we propose a learnable module that can be placed on top of existing segmentation networks for performing few-shot segmentation. This module, called the task-adaptive feature transformer (TAFT), linearly transforms task-specific high-level features to a set of task agnostic features well-suited to conducting few-shot segmentation. The task-conditioned feature transformation allows an effective utilization of the semantic information in novel classes to generate tight segmentation masks. We also propose a semantic enrichment (SE) module that utilizes a pixel-wise attention module for high-level feature and an auxiliary loss from an auxiliary segmentation network conducting the semantic segmentation for all training classes. Experiments on PASCAL-$5^i$ and COCO-$20^i$ datasets confirm that the added modules successfully extend the capability of existing segmentators to yield highly competitive few-shot segmentation performances. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 280,245
2407.01471 | Tracking the 2024 US Presidential Election Chatter on Tiktok: A Public Multimodal Dataset | This paper documents our release of a large-scale data collection of TikTok posts related to the upcoming 2024 U.S. Presidential Election. Our current data comprises 1.8 million videos published between November 1, 2023, and May 26, 2024. Its exploratory analysis identifies the most common keywords, hashtags, and bigrams in both Spanish and English posts, focusing on the election and the two main Presidential candidates, President Joe Biden and Donald Trump. We utilized the TikTok Research API, incorporating various election-related keywords and hashtags, to capture the full scope of relevant content. To address the limitations of the TikTok Research API, we also employed third-party scrapers to expand our dataset. The dataset is publicly available at https://github.com/gabbypinto/US2024PresElectionTikToks | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 469,313
2008.07338 | Predicting United States policy outcomes with Random Forests | Two decades of U.S. government legislative outcomes, as well as the policy preferences of rich people, the general population, and diverse interest groups, were captured in a detailed dataset curated and analyzed by Gilens, Page et al. (2014). They found that the preferences of the rich correlated strongly with policy outcomes, while the preferences of the general population did not, except via a linkage with rich people's preferences. Their analysis applied the tools of classical statistical inference, in particular logistic regression. In this paper we analyze the Gilens dataset using the complementary tools of Random Forest classifiers (RFs), from Machine Learning. We present two primary findings, concerning respectively prediction and inference: (i) Holdout test sets can be predicted with approximately 70% balanced accuracy by models that consult only the preferences of rich people and a small number of powerful interest groups, as well as policy area labels. These results include retrodiction, where models trained on pre-1997 cases predicted "future" (post-1997) cases. The 20% gain in accuracy over baseline (chance), in this detailed but noisy dataset, indicates the high importance of a few wealthy players in U.S. policy outcomes, and aligns with a body of research indicating that the U.S. government has significant plutocratic tendencies. (ii) The feature selection methods of RF models identify especially salient subsets of interest groups (economic players). These can be used to further investigate the dynamics of governmental policy making, and also offer an example of the potential value of RF feature selection methods for inference on datasets such as this. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 192,077 |
2111.13185 | Learning Conditional Invariance through Cycle Consistency | Identifying meaningful and independent factors of variation in a dataset is a challenging learning task frequently addressed by means of deep latent variable models. This task can be viewed as learning symmetry transformations preserving the value of a chosen property along latent dimensions. However, existing approaches exhibit severe drawbacks in enforcing the invariance property in the latent space. We address these shortcomings with a novel approach to cycle consistency. Our method involves two separate latent subspaces for the target property and the remaining input information, respectively. In order to enforce invariance as well as sparsity in the latent space, we incorporate semantic knowledge by using cycle consistency constraints relying on property side information. The proposed method is based on the deep information bottleneck and, in contrast to other approaches, allows using continuous target properties and provides inherent model selection capabilities. We demonstrate on synthetic and molecular data that our approach identifies more meaningful factors which lead to sparser and more interpretable models with improved invariance properties. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 268,217 |
2308.12964 | Dense Text-to-Image Generation with Attention Modulation | Existing text-to-image diffusion models struggle to synthesize realistic images given dense captions, where each text prompt provides a detailed description for a specific image region. To address this, we propose DenseDiffusion, a training-free method that adapts a pre-trained text-to-image model to handle such dense captions while offering control over the scene layout. We first analyze the relationship between generated images' layouts and the pre-trained model's intermediate attention maps. Next, we develop an attention modulation method that guides objects to appear in specific regions according to layout guidance. Without requiring additional fine-tuning or datasets, we improve image generation performance given dense captions regarding both automatic and human evaluation scores. In addition, we achieve similar-quality visual results with models specifically trained with layout conditions. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 387,736 |
2502.09970 | Universal Machine Learning Interatomic Potentials are Ready for Solid Ion Conductors | With the rapid development of energy storage technology, high-performance solid-state electrolytes (SSEs) have become critical for next-generation lithium-ion batteries. These materials require high ionic conductivity, excellent electrochemical stability, and good mechanical properties to meet the demands of electric vehicles and portable electronics. However, traditional methods like density functional theory (DFT) and empirical force fields face challenges such as high computational costs, poor scalability, and limited accuracy across material systems. Universal machine learning interatomic potentials (uMLIPs) offer a promising solution with their efficiency and near-DFT-level accuracy. This study systematically evaluates six advanced uMLIP models (MatterSim, MACE, SevenNet, CHGNet, M3GNet, and ORBFF) in terms of energy, forces, thermodynamic properties, elastic moduli, and lithium-ion diffusion behavior. The results show that MatterSim outperforms others in nearly all metrics, particularly in complex material systems, demonstrating superior accuracy and physical consistency. Other models exhibit significant deviations due to issues like energy inconsistency or insufficient training data coverage. Further analysis reveals that MatterSim achieves excellent agreement with reference values in lithium-ion diffusivity calculations, especially at room temperature. Studies on Li3YCl6 and Li6PS5Cl uncover how crystal structure, anion disorder levels, and Na/Li arrangements influence ionic conductivity. Appropriate S/Cl disorder levels and optimized Na/Li arrangements enhance diffusion pathway connectivity, improving overall ionic transport performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 533,684
2306.17690 | Generalized Time Warping Invariant Dictionary Learning for Time Series Classification and Clustering | Dictionary learning is an effective tool for pattern recognition and classification of time series data. Among various dictionary learning techniques, dynamic time warping (DTW) is commonly used for dealing with temporal delays, scaling, transformation, and many other kinds of temporal misalignment issues. However, DTW suffers from overfitting or information loss due to its discrete nature in aligning time series data. To address this issue, we propose a generalized time warping invariant dictionary learning algorithm in this paper. Our approach features a generalized time warping operator, which consists of linear combinations of continuous basis functions for facilitating continuous temporal warping. The integration of the proposed operator and the dictionary learning is formulated as an optimization problem, where the block coordinate descent method is employed to jointly optimize warping paths, dictionaries, and sparseness coefficients. The optimized results are then used as hyperspace distance measures to feed classification and clustering algorithms. The superiority of the proposed method in terms of dictionary learning, classification, and clustering is validated through ten sets of public datasets in comparison with various benchmark methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 376,783
2402.11819 | Head-wise Shareable Attention for Large Language Models | Large Language Models (LLMs) suffer from a huge number of parameters, which restricts their deployment on edge devices. Weight sharing is one promising solution that encourages weight reuse, effectively reducing memory usage with less performance drop. However, current weight sharing techniques primarily focus on small-scale models like BERT and employ coarse-grained sharing rules, e.g., layer-wise. This becomes limiting given the prevalence of LLMs, and sharing an entire layer or block obviously diminishes the flexibility of weight sharing. In this paper, we present a perspective on head-wise shareable attention for large language models. We further propose two memory-efficient methods that share parameters across attention heads, with a specific focus on LLMs. Both of them use the same dynamic strategy to select the shared weight matrices. The first method directly reuses the pre-trained weights without retraining, denoted as $\textbf{DirectShare}$. The second method first post-trains with constraint on weight matrix similarity and then shares, denoted as $\textbf{PostShare}$. Experimental results reveal our head-wise shared models still maintain satisfactory capabilities, demonstrating the feasibility of fine-grained weight sharing applied to LLMs. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 430,587
2502.02067 | AdaptBot: Combining LLM with Knowledge Graphs and Human Input for Generic-to-Specific Task Decomposition and Knowledge Refinement | Embodied agents assisting humans are often asked to complete a new task in a new scenario. An agent preparing a particular dish in the kitchen based on a known recipe may be asked to prepare a new dish or to perform cleaning tasks in the storeroom. There may not be sufficient resources, e.g., time or labeled examples, to train the agent for these new situations. Large Language Models (LLMs) trained on considerable knowledge across many domains are able to predict a sequence of abstract actions for such new tasks and scenarios, although it may not be possible for the agent to execute this action sequence due to task-, agent-, or domain-specific constraints. Our framework addresses these challenges by leveraging the generic predictions provided by LLM and the prior domain-specific knowledge encoded in a Knowledge Graph (KG), enabling an agent to quickly adapt to new tasks and scenarios. The robot also solicits and uses human input as needed to refine its existing knowledge. Based on experimental evaluation over cooking and cleaning tasks in simulation domains, we demonstrate that the interplay between LLM, KG, and human input leads to substantial performance gains compared with just using the LLM output. | false | false | false | false | true | false | true | true | true | false | false | false | false | false | false | false | false | false | 530,167
1311.5998 | A brief network analysis of Artificial Intelligence publication | In this paper, we present an illustration of the history of Artificial Intelligence (AI) with a statistical analysis of publications since 1940. We collected and mined the IEEE publication database to analyze the geographical and chronological variation of research activity in AI. The connections between different institutes are shown. The results show that the leading communities of AI research are mainly in the USA, China, Europe and Japan. The key institutes, authors and research hotspots are revealed. It is found that the research institutes in fields like Data Mining, Computer Vision, Pattern Recognition and some other fields of Machine Learning are quite consistent, implying a strong interaction between the communities of each field. It is also shown that research in Electronic Engineering and in industrial or commercial applications is very active in California. Japan is also publishing many papers in robotics. Due to the limitation of the data source, the result might be overly influenced by the number of published articles, which we have mitigated as far as possible by applying network key-node analysis on the research community instead of merely counting publications. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 28,604
2104.05239 | Look Closer to Segment Better: Boundary Patch Refinement for Instance Segmentation | Tremendous efforts have been made on instance segmentation but the mask quality is still not satisfactory. The boundaries of predicted instance masks are usually imprecise due to the low spatial resolution of feature maps and the imbalance problem caused by the extremely low proportion of boundary pixels. To address these issues, we propose a conceptually simple yet effective post-processing refinement framework to improve the boundary quality based on the results of any instance segmentation model, termed BPR. Following the idea of looking closer to segment boundaries better, we extract and refine a series of small boundary patches along the predicted instance boundaries. The refinement is accomplished by a boundary patch refinement network at higher resolution. The proposed BPR framework yields significant improvements over the Mask R-CNN baseline on Cityscapes benchmark, especially on the boundary-aware metrics. Moreover, by applying the BPR framework to the PolyTransform + SegFix baseline, we reached 1st place on the Cityscapes leaderboard. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 229,652
2111.06364 | Open Data Fabric: A Decentralized Data Exchange and Transformation
Protocol With Complete Reproducibility and Provenance | Data is the most powerful decision-making tool at our disposal. However, despite the exponentially growing volumes of data generated in the world, putting it to effective use still presents many challenges. Relevant data seems to be never there when it is needed - it remains siloed, hard to find, hard to access, outdated, and of bad quality. As a result, governments, institutions, and businesses remain largely impaired in their ability to make data-driven decisions. At the same time, data science is undergoing a reproducibility crisis. The results of the vast majority of studies cannot be replicated by other researchers, and provenance often cannot be established, even for data used in medical studies that affect the lives of millions. We are losing our ability to collaborate at a time when significant improvements to data are badly needed. We believe that the fundamental reason lies in the modern data management processes being entirely at odds with the basic principles of collaboration and trust. Our field needs a fundamental shift of approach in how data is viewed, how it is shared and transformed. We must transition away from treating data as static, from exchanging it as anemic binary blobs, and instead focus on the properties that make multi-party data management sustainable: reproducibility, verifiability, provenance, autonomy, and low latency. In this paper, we present the Open Data Fabric, a new decentralized data exchange and transformation protocol designed from the ground up to simplify data management and enable collaboration around data on a similar scale as currently seen in open-source software. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 266,051
1909.05024 | Learning to Propagate for Graph Meta-Learning | Meta-learning extracts common knowledge from learning different tasks and uses it for unseen tasks. It can significantly improve tasks that suffer from insufficient training data, e.g., few shot learning. In most meta-learning methods, tasks are implicitly related by sharing parameters or optimizer. In this paper, we show that a meta-learner that explicitly relates tasks on a graph describing the relations of their output dimensions (e.g., classes) can significantly improve few shot learning. The graph's structure is usually free or cheap to obtain but has rarely been explored in previous works. We develop a novel meta-learner of this type for prototype-based classification, in which a prototype is generated for each class, such that the nearest neighbor search among the prototypes produces an accurate classification. The meta-learner, called "Gated Propagation Network (GPN)", learns to propagate messages between prototypes of different classes on the graph, so that learning the prototype of each class benefits from the data of other related classes. In GPN, an attention mechanism aggregates messages from neighboring classes of each class, with a gate choosing between the aggregated message and the message from the class itself. We train GPN on a sequence of tasks from many-shot to few shot generated by subgraph sampling. During training, it is able to reuse and update previously achieved prototypes from the memory in a life-long learning cycle. In experiments, under different training-test discrepancy and test task generation settings, GPN outperforms recent meta-learning methods on two benchmark datasets. The code of GPN and dataset generation is available at https://github.com/liulu112601/Gated-Propagation-Net. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 144,973 |
2104.04238 | Legged Robot State Estimation in Slippery Environments Using Invariant
Extended Kalman Filter with Velocity Update | This paper proposes a state estimator for legged robots operating in slippery environments. An Invariant Extended Kalman Filter (InEKF) is implemented to fuse inertial and velocity measurements from a tracking camera and leg kinematic constraints. The misalignment between the camera and the robot-frame is also modeled, thus enabling auto-calibration of camera pose. The leg kinematics based velocity measurement is formulated as a right-invariant observation. Nonlinear observability analysis shows that other than the rotation around the gravity vector and the absolute position, all states are observable except for some singular cases. Discrete observability analysis demonstrates that our filter is consistent with the underlying nonlinear system. An online noise parameter tuning method is developed to adapt to the highly time-varying camera measurement noise. The proposed method is experimentally validated on a Cassie bipedal robot walking over slippery terrain. A video for the experiment can be found at https://youtu.be/VIqJL0cUr7s. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 229,334
1401.4589 | miRNA and Gene Expression based Cancer Classification using Self-
Learning and Co-Training Approaches | miRNA and gene expression profiles have been proved useful for classifying cancer samples, and efficient classifiers have recently been sought and developed. A number of attempts to classify cancer samples using miRNA/gene expression profiles are known in the literature. However, semi-supervised learning models have recently been used in bioinformatics to exploit the huge corpora of publicly available data sets. Using both labeled and unlabeled sets to train sample classifiers has not previously been considered when gene and miRNA expression sets are used. Moreover, there is a motivation to integrate both miRNA and gene expression for semi-supervised cancer classification, as that provides more information on the characteristics of cancer samples. In this paper, two semi-supervised machine learning approaches, namely self-learning and co-training, are adapted to enhance the quality of cancer sample classification. These approaches exploit the huge public corpora to enrich the training data. In self-learning, miRNA- and gene-based classifiers are enhanced independently, while in co-training, both miRNA and gene expression profiles are used simultaneously to provide different views of cancer samples. To our knowledge, this is the first attempt to apply these learning approaches to cancer classification. The approaches were evaluated using breast cancer, hepatocellular carcinoma (HCC), and lung cancer expression sets. Results show up to 20% improvement in F1-measure over Random Forests and SVM classifiers. Co-training also outperforms the Low Density Separation (LDS) approach by around 25% improvement in F1-measure on breast cancer. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 30,086
2406.15958 | Bone Fracture Classification using Transfer Learning | The manual examination of X-ray images for fractures is a time-consuming process that is prone to human error. In this work, we introduce a robust yet simple training loop for the classification of fractures, which significantly outperforms existing methods. Our method achieves superior performance in less than ten epochs and utilizes the latest dataset to deliver the best-performing model for this task. We emphasize the importance of training deep learning models responsibly and efficiently, as well as the critical role of selecting high-quality datasets. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 466,935 |
2112.04591 | Variational Regularization in Inverse Problems and Machine Learning | This paper discusses basic results and recent developments on variational regularization methods, as developed for inverse problems. In a typical setup, we review the basic properties needed to obtain a convergent regularization scheme and further discuss the derivation of quantitative estimates and the ingredients they require, such as Bregman distances for convex functionals. In addition to the approach developed for inverse problems, we also discuss variational regularization in machine learning and work out some connections to classical regularization theory. In particular, we discuss a reinterpretation of machine learning problems in the framework of regularization theory and a reinterpretation of variational methods for inverse problems in the framework of risk minimization. Moreover, we establish some previously unknown connections between error estimates in Bregman distances and generalization errors. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 270,574
1612.06138 | Boosting Neural Machine Translation | Training efficiency is one of the main problems in Neural Machine Translation (NMT). Deep networks need very large amounts of data as well as many training iterations to achieve state-of-the-art performance. This results in very high computation cost, slowing down research and industrialisation. In this paper, we propose to alleviate this problem with several training methods based on data boosting and bootstrap, with no modifications to the neural network. They imitate the learning process of humans, who typically spend more time when learning "difficult" concepts than easier ones. We experiment on an English-French translation task, showing accuracy improvements of up to 1.63 BLEU while saving 20% of training time. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 65,781
2003.09855 | TanhExp: A Smooth Activation Function with High Convergence Speed for
Lightweight Neural Networks | Lightweight or mobile neural networks used for real-time computer vision tasks contain fewer parameters than normal networks, which leads to constrained performance. In this work, we propose a novel activation function named the Tanh Exponential Activation Function (TanhExp), which can significantly improve the performance of these networks on image classification tasks. The definition of TanhExp is f(x) = x tanh(e^x). We demonstrate the simplicity, efficiency, and robustness of TanhExp on various datasets and network models, and TanhExp outperforms its counterparts in both convergence speed and accuracy. Its behaviour also remains stable even when noise is added and the dataset is altered. We show that, without increasing the size of the network, the capacity of lightweight neural networks can be enhanced by TanhExp with only a few training epochs and no extra parameters added. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | true | false | false | 169,164
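The TanhExp record above states the activation in closed form, f(x) = x tanh(e^x), which is simple enough to sketch directly. The following is an illustrative NumPy implementation based only on that formula, not the authors' released code:

```python
import numpy as np

def tanhexp(x):
    """TanhExp activation as defined in the abstract: f(x) = x * tanh(exp(x)).

    For large positive x, tanh(exp(x)) saturates at 1, so f approaches the
    identity; for large negative x, exp(x) -> 0 and the output decays to 0.
    """
    x = np.asarray(x, dtype=float)
    return x * np.tanh(np.exp(x))
```

Since the function is smooth and near-identity on the positive side, it can be dropped into a lightweight network in place of ReLU-style activations without adding any parameters, which matches the claim in the abstract.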
1703.10772 | Joining Hands: Exploiting Monolingual Treebanks for Parsing of
Code-mixing Data | In this paper, we propose efficient and less resource-intensive strategies for parsing of code-mixed data. These strategies are not constrained by in-domain annotations, rather they leverage pre-existing monolingual annotated resources for training. We show that these methods can produce significantly better results as compared to an informed baseline. Besides, we also present a data set of 450 Hindi and English code-mixed tweets of Hindi multilingual speakers for evaluation. The data set is manually annotated with Universal Dependencies. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 70,975 |
2309.06844 | Gpachov at CheckThat! 2023: A Diverse Multi-Approach Ensemble for
Subjectivity Detection in News Articles | The widespread use of social networks has given rise to subjective, misleading, and even false information on the Internet. Thus, subjectivity detection can play an important role in ensuring the objectivity and the quality of a piece of information. This paper presents the solution built by the Gpachov team for the CLEF-2023 CheckThat! lab Task 2 on subjectivity detection. Three different research directions are explored. The first one is based on fine-tuning a sentence embeddings encoder model and dimensionality reduction. The second one explores a sample-efficient few-shot learning model. The third one evaluates fine-tuning a multilingual transformer on an altered dataset, using data from multiple languages. Finally, the three approaches are combined in a simple majority voting ensemble, resulting in 0.77 macro F1 on the test set and achieving 2nd place on the English subtask. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | true | 391,568
1511.07677 | A robust extension to the triple plane pressure mode matching method by
filtering convective perturbations | Time-periodic CFD simulations are widely used to investigate turbomachinery components. The triple-plane pressure mode matching method (TPP) developed by Ovenden and Rienstra extracts the acoustic part in such simulations. Experience shows that this method is subject to significant errors when the amplitude of pseudo-sound is high compared to that of sound. Pseudo-sound consists of unsteady pressure fluctuations with a convective character. The presented extension to the TPP improves the splitting between acoustics and the rest of the unsteady flow field. The method is simple: i) the acoustic eigenmodes are analytically determined for a uniform mean flow as in the original TPP; ii) the suggested model for convective pressure perturbations uses the convective wavenumber as axial wavenumber and the same orthogonal radial shape functions as for the acoustic modes. The reliability is demonstrated on the simulation data of a low-pressure fan. As acoustic and convective perturbations are separated, the accuracy of the results increases close to sources, allowing a reduction of the computational costs by shortening the simulation domain. The extended method is as robust as the original one, giving the same results for the acoustic modes in the absence of convective perturbations. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 49,460
2310.14777 | Geographical Erasure in Language Generation | Large language models (LLMs) encode vast amounts of world knowledge. However, since these models are trained on large swaths of internet data, they are at risk of inordinately capturing information about dominant groups. This imbalance can propagate into generated language. In this work, we study and operationalise a form of geographical erasure, wherein language models underpredict certain countries. We demonstrate consistent instances of erasure across a range of LLMs. We discover that erasure strongly correlates with low frequencies of country mentions in the training corpus. Lastly, we mitigate erasure by finetuning using a custom objective. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 402,016 |
2403.09305 | Pushing in the Dark: A Reactive Pushing Strategy for Mobile Robots Using
Tactile Feedback | For mobile robots, navigating cluttered or dynamic environments often necessitates non-prehensile manipulation, particularly when faced with objects that are too large, irregular, or fragile to grasp. The unpredictable behavior and varying physical properties of these objects significantly complicate manipulation tasks. To address this challenge, this manuscript proposes a novel Reactive Pushing Strategy. This strategy allows a mobile robot to dynamically adjust its base movements in real-time to achieve successful pushing maneuvers towards a target location. Notably, our strategy adapts the robot motion based on changes in contact location obtained through the tactile sensor covering the base, avoiding dependence on object-related assumptions and on modeled object behavior. The effectiveness of the Reactive Pushing Strategy was initially evaluated in the simulation environment, where it significantly outperformed the compared baseline approaches. Following this, we validated the proposed strategy through real-world experiments, demonstrating the robot's capability to push objects to target points located in the entire vicinity of the robot. In both simulation and real-world experiments, the object-specific properties (shape, mass, friction, inertia) were altered along with the changes in target locations to assess the robustness of the proposed method comprehensively. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 437,715
2002.03329 | Better Theory for SGD in the Nonconvex World | Large-scale nonconvex optimization problems are ubiquitous in modern machine learning, and among practitioners interested in solving them, Stochastic Gradient Descent (SGD) reigns supreme. We revisit the analysis of SGD in the nonconvex setting and propose a new variant of the recently introduced expected smoothness assumption which governs the behaviour of the second moment of the stochastic gradient. We show that our assumption is both more general and more reasonable than assumptions made in all prior work. Moreover, our results yield the optimal $\mathcal{O}(\varepsilon^{-4})$ rate for finding a stationary point of nonconvex smooth functions, and recover the optimal $\mathcal{O}(\varepsilon^{-1})$ rate for finding a global solution if the Polyak-{\L}ojasiewicz condition is satisfied. We compare against convergence rates under convexity and prove a theorem on the convergence of SGD under Quadratic Functional Growth and convexity, which might be of independent interest. Moreover, we perform our analysis in a framework which allows for a detailed study of the effects of a wide array of sampling strategies and minibatch sizes for finite-sum optimization problems. We corroborate our theoretical results with experiments on real and synthetic data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 163,233 |
1805.03435 | Decoding Decoders: Finding Optimal Representation Spaces for
Unsupervised Similarity Tasks | Experimental evidence indicates that simple models outperform complex deep networks on many unsupervised similarity tasks. We provide a simple yet rigorous explanation for this behaviour by introducing the concept of an optimal representation space, in which semantically close symbols are mapped to representations that are close under a similarity measure induced by the model's objective function. In addition, we present a straightforward procedure that, without any retraining or architectural modifications, allows deep recurrent models to perform equally well (and sometimes better) when compared to shallow models. To validate our analysis, we conduct a set of consistent empirical evaluations and introduce several new sentence embedding models in the process. Even though this work is presented within the context of natural language processing, the insights are readily applicable to other domains that rely on distributed representations for transfer tasks. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 97,052 |
2012.06919 | Offline Policy Selection under Uncertainty | The presence of uncertainty in policy evaluation significantly complicates the process of policy ranking and selection in real-world settings. We formally consider offline policy selection as learning preferences over a set of policy prospects given a fixed experience dataset. While one can select or rank policies based on point estimates of their policy values or high-confidence intervals, access to the full distribution over one's belief of the policy value enables more flexible selection algorithms under a wider range of downstream evaluation metrics. We propose BayesDICE for estimating this belief distribution in terms of posteriors of distribution correction ratios derived from stochastic constraints (as opposed to explicit likelihood, which is not available). Empirically, BayesDICE is highly competitive to existing state-of-the-art approaches in confidence interval estimation. More importantly, we show how the belief distribution estimated by BayesDICE may be used to rank policies with respect to any arbitrary downstream policy selection metric, and we empirically demonstrate that this selection procedure significantly outperforms existing approaches, such as ranking policies according to mean or high-confidence lower bound value estimates. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 211,270 |
1404.1972 | Regularization for Design | When designing controllers for large-scale systems, the architectural aspects of the controller such as the placement of actuators, sensors, and the communication links between them can no longer be taken as given. The task of designing this architecture is now as important as the design of the control laws themselves. By interpreting controller synthesis (in a model matching setup) as the solution of a particular linear inverse problem, we view the challenge of obtaining a controller with a desired architecture as one of finding a structured solution to an inverse problem. Building on this conceptual connection, we formulate and analyze a framework called Regularization for Design (RFD), in which we augment the variational formulations of controller synthesis problems with convex penalty functions that induce a desired controller architecture. The resulting regularized formulations are convex optimization problems that can be solved efficiently; these convex programs provide a unified, computationally tractable approach for the simultaneous co-design of a structured optimal controller and the actuation, sensing, and communication architecture required to implement it. Further, these problems are natural control-theoretic analogs of prominent approaches such as the Lasso, the Group Lasso, the Elastic Net, and others that are employed in statistical modeling. In analogy to that literature, we show that our approach identifies optimally structured controllers under a suitable condition on a "signal-to-noise" type ratio. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 32,160
2308.03417 | PURL: Safe and Effective Sanitization of Link Decoration | While privacy-focused browsers have taken steps to block third-party cookies and mitigate browser fingerprinting, novel tracking techniques that can bypass existing countermeasures continue to emerge. Since trackers need to share information from the client-side to the server-side through link decoration regardless of the tracking technique they employ, a promising orthogonal approach is to detect and sanitize tracking information in decorated links. To this end, we present PURL (pronounced purel-l), a machine-learning approach that leverages a cross-layer graph representation of webpage execution to safely and effectively sanitize link decoration. Our evaluation shows that PURL significantly outperforms existing countermeasures in terms of accuracy and reducing website breakage while being robust to common evasion techniques. PURL's deployment on a sample of top-million websites shows that link decoration is abused for tracking on nearly three-quarters of the websites, often to share cookies, email addresses, and fingerprinting information. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 384,033 |
1701.06331 | Higher-order models capture changes in controllability of temporal
networks | In many complex systems, elements interact via time-varying network topologies. Recent research shows that temporal correlations in the chronological ordering of interactions crucially influence network properties and dynamical processes. How these correlations affect our ability to control systems with time-varying interactions remains unclear. In this work, we use higher-order network models to extend the framework of structural controllability to temporal networks, where the chronological ordering of interactions gives rise to time-respecting paths with non-Markovian characteristics. We study six empirical data sets and show that non-Markovian characteristics of real systems can either increase or decrease the minimum time needed to control the whole system. With both empirical data and synthetic models, we further show that spectral properties of generalisations of graph Laplacians to higher-order networks can be used to analytically capture the effect of temporal correlations on controllability. Our work highlights that (i) correlations in the chronological ordering of interactions are an important source of complexity that significantly influences the controllability of temporal networks, and (ii) higher-order network models are a powerful tool to understand the temporal-topological characteristics of empirical systems. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 67,114
1309.6301 | Solving OSCAR regularization problems by proximal splitting algorithms | The OSCAR (octagonal selection and clustering algorithm for regression) regularizer consists of a L_1 norm plus a pair-wise L_inf norm (responsible for its grouping behavior) and was proposed to encourage group sparsity in scenarios where the groups are a priori unknown. The OSCAR regularizer has a non-trivial proximity operator, which limits its applicability. We reformulate this regularizer as a weighted sorted L_1 norm, and propose its grouping proximity operator (GPO) and approximate proximity operator (APO), thus making state-of-the-art proximal splitting algorithms (PSAs) available to solve inverse problems with OSCAR regularization. The GPO is in fact the APO followed by additional grouping and averaging operations, which are costly in time and storage, explaining why algorithms with the APO are much faster than those with the GPO. The convergence of PSAs with the GPO is guaranteed since the GPO is an exact proximity operator. Although the convergence of PSAs with the APO may not be guaranteed, we have experimentally found that the APO behaves similarly to the GPO when the regularization parameter of the pair-wise L_inf norm is set to an appropriately small value. Experiments on recovery of group-sparse signals (with unknown groups) show that PSAs with the APO are very fast and accurate. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 27,235
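The OSCAR record above describes the penalty as an L_1 norm plus pairwise L_inf norms, and its reformulation as a weighted sorted L_1 norm. A small sketch can make that equivalence concrete; the code below is an illustration of the penalty itself (not the authors' proximity-operator code), and the parameter names `lam1`/`lam2` are our own labels for the two regularization weights:

```python
import numpy as np
from itertools import combinations

def oscar_pairwise(w, lam1, lam2):
    # Direct form from the abstract: lam1 * ||w||_1 plus lam2 times the
    # sum of pairwise L_inf norms max(|w_i|, |w_j|) over all pairs i < j.
    a = np.abs(np.asarray(w, dtype=float))
    pair_sum = sum(max(a[i], a[j]) for i, j in combinations(range(len(a)), 2))
    return float(lam1 * a.sum() + lam2 * pair_sum)

def oscar_sorted_l1(w, lam1, lam2):
    # Weighted sorted L_1 reformulation: the k-th largest |w_i| (k = 1..n)
    # appears as the max in exactly (n - k) pairs, so it receives the
    # weight lam1 + lam2 * (n - k).
    a = np.sort(np.abs(np.asarray(w, dtype=float)))[::-1]
    n = len(a)
    weights = lam1 + lam2 * (n - 1 - np.arange(n))
    return float(weights @ a)
```

Because the weights decrease monotonically with the sorted magnitudes, this is exactly the weighted sorted L_1 structure the abstract exploits to build the GPO and APO proximity operators.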
2101.10002 | Securing Full-Duplex Amplify-and-Forward Relay-Aided Transmissions
Through Processing-Time Optimization | We investigate physical-layer security of the full-duplex (FD) amplify-and-forward (AF) relay channel. We provide a new perspective on the problem and show that the processing time (delay) at the relay can be exploited to improve the system's security. We show that the FD AF relay channel can be seen as an intersymbol-interference (ISI) channel, hence, the discrete-Fourier transform (DFT) can be used for data modulation and demodulation to convert the frequency-selective channel into flat-fading channel per sub-channel/sub-carrier. By exploiting the fact that the channel memory needs to be cleared by inserting the cyclic-prefix, Alice injects an artificial-noise (AN) signal that hurts the eavesdropping nodes only. The strength of this AN signal and its interference rank are controlled by the relay's processing time. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 216,789 |
2410.01984 | A Preventive-Corrective Scheme for Ensuring Power System Security During
Active Wildfire Risks | The focus of this paper is on operating the electric power grid in a secure manner when wildfire risks are high. This is a challenging problem because of the uncertain ways in which the fires can impact the operation of the power system. To address this challenge, we propose a novel preventive-corrective coordinated decision-making scheme that quickly mitigates both static and dynamic insecurities given the risk of active wildfires in a region. The scheme utilizes a comprehensive contingency analysis tool for multi-asset outages that leverages: (i) a Feasibility Test algorithm which exhaustively desaturates overloaded cut-sets to prevent cascading line outages, and (ii) a data-driven transient stability analyzer which alleviates dynamic instabilities. This tool is then used to operate a coordinated unit commitment/optimal power flow model that is designed to adapt to varying risk levels associated with wildfires. Depending on the allowed risk, the model balances economical operation and grid robustness. The results obtained using the IEEE 118-bus system indicate that the proposed approach alleviates system vulnerabilities to wildfires while also minimizing operational cost. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 494,044 |
2110.10575 | SocialVisTUM: An Interactive Visualization Toolkit for Correlated Neural
Topic Models on Social Media Opinion Mining | Recent research in opinion mining has proposed word embedding-based topic modeling methods that provide superior coherence compared to traditional topic modeling. In this paper, we demonstrate how these methods can be used to display correlated topic models on social media texts using SocialVisTUM, our proposed interactive visualization toolkit. It displays a graph with topics as nodes and their correlations as edges. Further details are displayed interactively to support the exploration of large text collections, e.g., representative words and sentences of topics, topic and sentiment distributions, hierarchical topic clustering, and customizable, predefined topic labels. The toolkit automatically optimizes topic coherence on custom data. We show a working instance of the toolkit on data crawled from English social media discussions about organic food consumption. The visualization confirms findings of a qualitative consumer research study. SocialVisTUM and its training procedures are accessible online. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 262,203
1506.01911 | Beyond Temporal Pooling: Recurrence and Temporal Convolutions for
Gesture Recognition in Video | Recent studies have demonstrated the power of recurrent neural networks for machine translation, image captioning and speech recognition. For the task of capturing temporal structure in video, however, there still remain numerous open research questions. Current research suggests using a simple temporal feature pooling strategy to take into account the temporal aspect of video. We demonstrate that this method is not sufficient for gesture recognition, where temporal information is more discriminative compared to general video classification tasks. We explore deep architectures for gesture recognition in video and propose a new end-to-end trainable neural network architecture incorporating temporal convolutions and bidirectional recurrence. Our main contributions are twofold; first, we show that recurrence is crucial for this task; second, we show that adding temporal convolutions leads to significant improvements. We evaluate the different approaches on the Montalbano gesture recognition dataset, where we achieve state-of-the-art results. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | true | false | false | 43,835 |
1406.7128 | On a new formulation of nonlocal image filters involving the relative rearrangement | Nonlocal filters are simple and powerful techniques for image denoising. In this paper we study the reformulation of a broad class of nonlocal filters in terms of two functional rearrangements: the decreasing and the relative rearrangements. Independently of the dimension of the image, we reformulate these filters as integral operators defined in a one-dimensional space corresponding to the level sets measures. We prove the equivalency between the original and the rearranged versions of the filters and propose a discretization in terms of constant-wise interpolators, which we prove to be convergent to the solution of the continuous setting. For some particular cases, this new formulation allows us to perform a detailed analysis of the filtering properties. Among others, we prove that the filtered image is a contrast change of the original image, and that the filtering procedure behaves asymptotically as a shock filter combined with a border diffusive term, responsible for the staircasing effect and the loss of contrast. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 34,187
1706.09395 | Recovery of Missing Samples Using Sparse Approximation via a Convex Similarity Measure | In this paper, we study the missing sample recovery problem using methods based on sparse approximation. In this regard, we investigate the algorithms used for solving the inverse problem associated with the restoration of missing samples of an image signal. This problem is also known as inpainting in the context of image processing, and for this purpose we suggest an iterative sparse recovery algorithm based on constrained $l_1$-norm minimization with a new fidelity metric. The proposed metric, called the Convex SIMilarity (CSIM) index, is a simplified version of the Structural SIMilarity (SSIM) index, which is convex and error-sensitive. The optimization problem incorporating this criterion is then solved via the Alternating Direction Method of Multipliers (ADMM). Simulation results show the efficiency of the proposed method for missing sample recovery of 1D patch vectors and inpainting of 2D image signals. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 76,133
2202.00882 | MPVNN: Mutated Pathway Visible Neural Network Architecture for Interpretable Prediction of Cancer-specific Survival Risk | Survival risk prediction using gene expression data is important in making treatment decisions in cancer. Standard neural network (NN) survival analysis models are black boxes with lack of interpretability. More interpretable visible neural network (VNN) architectures are designed using biological pathway knowledge. But they do not model how pathway structures can change for particular cancer types. We propose a novel Mutated Pathway VNN or MPVNN architecture, designed using prior signaling pathway knowledge and gene mutation data-based edge randomization simulating signal flow disruption. As a case study, we use the PI3K-Akt pathway and demonstrate overall improved cancer-specific survival risk prediction results of MPVNN over standard non-NN and other similar sized NN survival analysis methods. We show that trained MPVNN architecture interpretation, which points to smaller sets of genes connected by signal flow within the PI3K-Akt pathway that are important in risk prediction for particular cancer types, is reliable. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 278,295
2003.05259 | Capturing document context inside sentence-level neural machine translation models with self-training | Neural machine translation (NMT) has arguably achieved human level parity when trained and evaluated at the sentence-level. Document-level neural machine translation has received less attention and lags behind its sentence-level counterpart. The majority of the proposed document-level approaches investigate ways of conditioning the model on several source or target sentences to capture document context. These approaches require training a specialized NMT model from scratch on parallel document-level corpora. We propose an approach that does not require training a specialized model on parallel document-level corpora and is applied to a trained sentence-level NMT model at decoding time. We process the document from left to right multiple times and self-train the sentence-level model on pairs of source sentences and generated translations. Our approach reinforces the choices made by the model, thus making it more likely that the same choices will be made in other sentences in the document. We evaluate our approach on three document-level datasets: NIST Chinese-English, WMT'19 Chinese-English and OpenSubtitles English-Russian. We demonstrate that our approach has a higher BLEU score and higher human preference than the baseline. Qualitative analysis of our approach shows that choices made by the model are consistent across the document. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 167,820
1905.03670 | S4L: Self-Supervised Semi-Supervised Learning | This work tackles the problem of semi-supervised learning of image classifiers. Our main insight is that the field of semi-supervised learning can benefit from the quickly advancing field of self-supervised visual representation learning. Unifying these two approaches, we propose the framework of self-supervised semi-supervised learning and use it to derive two novel semi-supervised image classification methods. We demonstrate the effectiveness of these methods in comparison to both carefully tuned baselines, and existing semi-supervised learning methods. We then show that our approach and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of labels. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 130,249 |
2002.09843 | An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks in Federated Learning | Although federated learning improves privacy of training data by exchanging local gradients or parameters rather than raw data, the adversary still can leverage local gradients and parameters to obtain local training data by launching reconstruction and membership inference attacks. To defend against such privacy attacks, many noise perturbation methods (like differential privacy or CountSketch matrix) have been widely designed. However, the strong defence ability and high learning accuracy of these schemes cannot be ensured at the same time, which will impede the wide application of FL in practice (especially for medical or financial institutions that require both high accuracy and strong privacy guarantees). To overcome this issue, in this paper, we propose \emph{an efficient model perturbation method for federated learning} to defend against reconstruction and membership inference attacks launched by curious clients. On the one hand, similar to differential privacy, our method also selects random numbers as perturbation noises added to the global model parameters, and thus it is very efficient and easy to integrate in practice. Meanwhile, the randomly selected noises are positive real numbers and the corresponding value can be arbitrarily large, and thus the strong defence ability can be ensured. On the other hand, unlike differential privacy or other perturbation methods that cannot eliminate the added noises, our method allows the server to recover the true gradients by eliminating the added noises. Therefore, our method does not hinder learning accuracy at all. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 165,202
2204.01721 | Meta-Learning Approaches for a One-Shot Collective-Decision Aggregation: Correctly Choosing how to Choose Correctly | Successfully aggregating the choices regarding a given decision problem made by the multiple collective members into a single solution is essential for exploiting the collective's intelligence and for effective crowdsourcing. There are various aggregation techniques, some of which come down to a simple and sometimes effective deterministic aggregation rule. However, it has been shown that the efficiency of those techniques is unstable under varying conditions and within different domains. Other methods mainly rely on learning from the decision-makers' previous responses or the availability of additional information about them. In this study, we present two one-shot machine-learning-based aggregation approaches. The first predicts, given multiple features about the collective's choices, including meta-cognitive ones, which aggregation method will be best for a given case. The second directly predicts which decision is optimal, given, among other things, the selection made by each method. We offer a meta-cognitive feature-engineering approach for characterizing a collective decision-making case in a context-sensitive fashion. In addition, we offer a new aggregation method, the Devil's-Advocate aggregator, to deal with cases in which standard aggregation methods are predicted to fail. Experimental results show that using either of our proposed approaches increases the percentage of successfully aggregated cases (i.e., cases in which the correct answer is returned) significantly, compared to the uniform application of each rule-based aggregation method. We also demonstrate the importance of the Devil's Advocate aggregator. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 289,707
2305.19455 | Implementation of a framework for deploying AI inference engines in FPGAs | The LCLS2 Free Electron Laser (FEL) will deliver x-ray pulses to beamline experiments at up to 1 MHz. These experiments will require new ultra-high-rate (UHR) detectors that can operate at rates above 100 kHz and generate data throughputs upwards of 1 TB/s, a data velocity which requires prohibitively large investments in storage infrastructure. Machine learning has demonstrated the potential to digest large datasets to extract relevant insights; however, current implementations show latencies that are too high for real-time data reduction objectives. SLAC has endeavored on the creation of a software framework which translates ML structures for deployment on Field Programmable Gate Arrays (FPGAs) deployed at the edge of the data chain, close to the instrumentation. This framework leverages Xilinx's HLS framework, presenting an API modeled after the open-source Keras interface to the TensorFlow library. This SLAC Neural Network Library (SNL) framework is designed with a streaming data approach, optimizing the data flow between layers while minimizing the data buffering requirements. The goal is to ensure the highest possible framerate while keeping the maximum latency constrained to the needs of the experiment. Our framework is designed to ensure the RTL implementation of the network layers supports full redeployment of weights and biases without requiring resynthesis after training. The ability to reduce the precision of the implemented networks through quantization is necessary to optimize the use of both DSP and memory resources in the FPGA. We currently have a preliminary version of the toolset and are experimenting with both general-purpose example networks and networks being designed for specific LCLS2 experiments. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 369,531
2303.08513 | On the number of subproblem iterations per coupling step in partitioned fluid-structure interaction simulations | In the literature, the cost of a partitioned fluid-structure interaction scheme is typically assessed by the number of coupling iterations required per time step, while ignoring the internal iterations within the nonlinear subproblems. In this work, we demonstrate that these internal iterations have a significant influence on the computational cost of the coupled simulation. Particular attention is paid to how limiting the number of iterations within each solver call can shorten the overall run time, as it avoids polishing the subproblem solution using unconverged coupling data. Based on systematic parameter studies, we investigate the optimal number of subproblem iterations per coupling step. Lastly, this work proposes a new convergence criterion for coupled systems that is based on the residuals of the subproblems and therefore does not require any additional convergence tolerance for the coupling loop. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 351,671
2411.17559 | Degrees of Freedom of Cache-Aided Interference Channels Assisted by Active Intelligent Reflecting Surfaces | This paper studies cache-aided wireless networks in the presence of active intelligent reflecting surfaces (IRS) from an information-theoretic perspective. Specifically, we explore interference management in a cache-aided wireless network assisted by an active IRS, to enhance the achievable degrees of freedom (DoF). To this end, we jointly design the content placement, delivery phase, and phase shifts of the IRS and propose a one-shot achievable scheme. Our scheme exploits transmitters' cooperation, cache contents (as side information), interference alignment, and IRS capabilities, adapting to the network's parameters. We derive the achievable one-shot sum-DoF for different sizes of cache memories, network configurations, and numbers of IRS elements. Our results highlight the potential of deploying an IRS in cache-aided wireless communication systems, underscoring the enhancement of achievable DoF for various parameter regimes, particularly when the sizes of the caches (especially at the transmitters) are inadequate. Notably, we show that access to an IRS with a sufficient number of elements enables the achievement of the maximum possible DoF for various parameter regimes of interest. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 511,500