id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2002.11386 | Polar-Slotted ALOHA over Slot Erasure Channels | In this paper, we design a new polar slotted ALOHA (PSA) protocol over the slot erasure channels, which uses polar coding to construct the identical slot pattern (SP) assembles within each active user and base station. A theoretical analysis framework for the PSA is provided. First, by using the packet-oriented operation for the overlap packets when they conflict in a slot interval, we introduce the packet-based polarization transform and prove that this transform is independent of the packet's length. Second, guided by the packet-based polarization, an SP assignment (SPA) method with the variable slot erasure probability (SEP) and a SPA method with a fixed SEP value are designed for the PSA scheme. Then, a packet-oriented successive cancellation (pSC) and a pSC list (pSCL) decoding algorithm are developed. Simultaneously, the finite-slots throughput bounds and the asymptotic throughput for the pSC algorithm are analyzed. The simulation results show that the proposed PSA scheme can achieve an improved throughput with the pSC/SCL decoding algorithm over the traditional repetition slotted ALOHA scheme. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 165,697 |
1504.06785 | Complete Dictionary Recovery over the Sphere | We consider the problem of recovering a complete (i.e., square and invertible) matrix $\mathbf A_0$, from $\mathbf Y \in \mathbb R^{n \times p}$ with $\mathbf Y = \mathbf A_0 \mathbf X_0$, provided $\mathbf X_0$ is sufficiently sparse. This recovery problem is central to the theoretical understanding of dictionary learning, which seeks a sparse representation for a collection of input signals, and finds numerous applications in modern signal processing and machine learning. We give the first efficient algorithm that provably recovers $\mathbf A_0$ when $\mathbf X_0$ has $O(n)$ nonzeros per column, under suitable probability model for $\mathbf X_0$. In contrast, prior results based on efficient algorithms provide recovery guarantees when $\mathbf X_0$ has only $O(n^{1-\delta})$ nonzeros per column for any constant $\delta \in (0, 1)$. Our algorithmic pipeline centers around solving a certain nonconvex optimization problem with a spherical constraint, and hence is naturally phrased in the language of manifold optimization. To show this apparently hard problem is tractable, we first provide a geometric characterization of the high-dimensional objective landscape, which shows that with high probability there are no "spurious" local minima. This particular geometric structure allows us to design a Riemannian trust region algorithm over the sphere that provably converges to one local minimizer with an arbitrary initialization, despite the presence of saddle points. The geometric approach we develop here may also shed light on other problems arising from nonconvex recovery of structured signals. | false | false | false | false | false | false | true | false | false | true | false | true | false | false | false | false | false | false | 42,453 |
1009.1889 | Spatially regularized compressed sensing of diffusion MRI data | The present paper introduces a method for substantial reduction of the number of diffusion encoding gradients required for reliable reconstruction of HARDI signals. The method exploits the theory of compressed sensing (CS), which establishes conditions on which a signal of interest can be recovered from its under-sampled measurements, provided that the signal admits a sparse representation in the domain of a linear transform. In the case at hand, the latter is defined to be spherical ridgelet transformation, which excels in sparsifying HARDI signals. What makes the resulting reconstruction procedure even more accurate is a combination of the sparsity constraints in the diffusion domain with additional constraints imposed on the estimated diffusion field in the spatial domain. Accordingly, the present paper describes a novel way to combine the diffusion- and spatial-domain constraints to achieve a maximal reduction in the number of diffusion measurements, while sacrificing little in terms of reconstruction accuracy. Finally, details are provided on a particularly efficient numerical scheme which can be used to solve the aforementioned reconstruction problem by means of standard and readily available estimation tools. The paper is concluded with experimental results which support the practical value of the proposed reconstruction methodology. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 7,519 |
1310.4761 | Towards Energy Neutrality in Energy Harvesting Wireless Sensor Networks: A Case for Distributed Compressive Sensing? | This paper advocates the use of the emerging distributed compressive sensing (DCS) paradigm in order to deploy energy harvesting (EH) wireless sensor networks (WSN) with practical network lifetime and data gathering rates that are substantially higher than the state-of-the-art. In particular, we argue that there are two fundamental mechanisms in an EH WSN: i) the energy diversity associated with the EH process that entails that the harvested energy can vary from sensor node to sensor node, and ii) the sensing diversity associated with the DCS process that entails that the energy consumption can also vary across the sensor nodes without compromising data recovery. We also argue that such mechanisms offer the means to match closely the energy demand to the energy supply in order to unlock the possibility for energy-neutral WSNs that leverage EH capability. A number of analytic and simulation results are presented in order to illustrate the potential of the approach. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 27,839 |
2303.09686 | Effect of Haptic Assistance Strategy on Mental Engagement in Fine Motor Tasks | This study investigates the effect of haptic control strategies on a subject's mental engagement during a fine motor handwriting rehabilitation task. The considered control strategies include an error-reduction (ER) and an error-augmentation (EA), which are tested on both dominant and non-dominant hand. A non-invasive brain-computer interface is used to monitor the electroencephalogram (EEG) activities of the subjects and evaluate the subject's mental engagement using the power of multiple frequency bands (theta, alpha, and beta). Statistical analysis of the effect of the control strategy on mental engagement revealed that the choice of the haptic control strategy has a significant effect (p < 0.001) on mental engagement depending on the type of hand (dominant or non-dominant). Among the evaluated strategies, EA is shown to be more mentally engaging when compared with the ER under the non-dominant hand. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 352,145 |
1203.0061 | ReStore: Reusing Results of MapReduce Jobs | Analyzing large scale data has emerged as an important activity for many organizations in the past few years. This large scale data analysis is facilitated by the MapReduce programming and execution model and its implementations, most notably Hadoop. Users of MapReduce often have analysis tasks that are too complex to express as individual MapReduce jobs. Instead, they use high-level query languages such as Pig, Hive, or Jaql to express their complex tasks. The compilers of these languages translate queries into workflows of MapReduce jobs. Each job in these workflows reads its input from the distributed file system used by the MapReduce system and produces output that is stored in this distributed file system and read as input by the next job in the workflow. The current practice is to delete these intermediate results from the distributed file system at the end of executing the workflow. One way to improve the performance of workflows of MapReduce jobs is to keep these intermediate results and reuse them for future workflows submitted to the system. In this paper, we present ReStore, a system that manages the storage and reuse of such intermediate results. ReStore can reuse the output of whole MapReduce jobs that are part of a workflow, and it can also create additional reuse opportunities by materializing and storing the output of query execution operators that are executed within a MapReduce job. We have implemented ReStore as an extension to the Pig dataflow system on top of Hadoop, and we experimentally demonstrate significant speedups on queries from the PigMix benchmark. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 14,670 |
1910.06575 | Aligning Cross-Lingual Entities with Multi-Aspect Information | Multilingual knowledge graphs (KGs), such as YAGO and DBpedia, represent entities in different languages. The task of cross-lingual entity alignment is to match entities in a source language with their counterparts in target languages. In this work, we investigate embedding-based approaches to encode entities from multilingual KGs into the same vector space, where equivalent entities are close to each other. Specifically, we apply graph convolutional networks (GCNs) to combine multi-aspect information of entities, including topological connections, relations, and attributes of entities, to learn entity embeddings. To exploit the literal descriptions of entities expressed in different languages, we propose two uses of a pretrained multilingual BERT model to bridge cross-lingual gaps. We further propose two strategies to integrate GCN-based and BERT-based modules to boost performance. Extensive experiments on two benchmark datasets demonstrate that our method significantly outperforms existing systems. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 149,383 |
2303.12910 | Cross-Layer Design for AI Acceleration with Non-Coherent Optical Computing | Emerging AI applications such as ChatGPT, graph convolutional networks, and other deep neural networks require massive computational resources for training and inference. Contemporary computing platforms such as CPUs, GPUs, and TPUs are struggling to keep up with the demands of these AI applications. Non-coherent optical computing represents a promising approach for light-speed acceleration of AI workloads. In this paper, we show how cross-layer design can overcome challenges in non-coherent optical computing platforms. We describe approaches for optical device engineering, tuning circuit enhancements, and architectural innovations to adapt optical computing to a variety of AI workloads. We also discuss techniques for hardware/software co-design that can intelligently map and adapt AI software to improve its performance on non-coherent optical computing platforms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 353,453 |
2403.03205 | Finding Super-spreaders in Network Cascades | Suppose that a cascade (e.g., an epidemic) spreads on an unknown graph, and only the infection times of vertices are observed. What can be learned about the graph from the infection times caused by multiple distinct cascades? Most of the literature on this topic focuses on the task of recovering the entire graph, which requires $\Omega ( \log n)$ cascades for an $n$-vertex bounded degree graph. Here we ask a different question: can the important parts of the graph be estimated from just a few (i.e., constant number) of cascades, even as $n$ grows large? In this work, we focus on identifying super-spreaders (i.e., high-degree vertices) from infection times caused by a Susceptible-Infected process on a graph. Our first main result shows that vertices of degree greater than $n^{3/4}$ can indeed be estimated from a constant number of cascades. Our algorithm for doing so leverages a novel connection between vertex degrees and the second derivative of the cumulative infection curve. Conversely, we show that estimating vertices of degree smaller than $n^{1/2}$ requires at least $\log(n) / \log \log (n)$ cascades. Surprisingly, this matches (up to $\log \log n$ factors) the number of cascades needed to learn the \emph{entire} graph if it is a tree. | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 435,105 |
1006.5445 | Technical Report: MIMO B-MAC Interference Network Optimization under Rate Constraints by Polite Water-filling and Duality | We take two new approaches to design efficient algorithms for transmitter optimization under rate constraints to guarantee the Quality of Service in general MIMO interference networks, named B-MAC Networks, which is a combination of multiple interfering broadcast channels (BC) and multiaccess channels (MAC). Two related optimization problems, maximizing the minimum of weighted rates under a sum-power constraint and minimizing the sum-power under rate constraints, are considered. The first approach takes advantage of existing efficient algorithms for SINR problems by building a bridge between rate and SINR through the design of optimal mappings between them so that the problems can be converted to SINR constraint problems. The approach can be applied to other optimization problems as well. The second approach employs polite water-filling, which is the optimal network version of water-filling that we recently found. It replaces almost all generic optimization algorithms currently used for networks and reduces the complexity while demonstrating superior performance even in non-convex cases. Both centralized and distributed algorithms are designed and the performance is analyzed in addition to numeric examples. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 6,914 |
2005.11894 | Update Bandwidth for Distributed Storage | In this paper, we consider the update bandwidth in distributed storage systems~(DSSs). The update bandwidth, which measures the transmission efficiency of the update process in DSSs, is defined as the total amount of data symbols transferred in the network when the data symbols stored in a node are updated. This paper contains the following contributions. First, we establish the closed-form expression of the minimum update bandwidth attainable by irregular array codes. Second, after defining a class of irregular array codes, called Minimum Update Bandwidth~(MUB) codes, which achieve the minimum update bandwidth of irregular array codes, we determine the smallest code redundancy attainable by MUB codes. Third, the code parameters, with which the minimum code redundancy of irregular array codes and the smallest code redundancy of MUB codes can be equal, are identified, which allows us to define MR-MUB codes as a class of irregular array codes that simultaneously achieve the minimum code redundancy and the minimum update bandwidth. Fourth, we introduce explicit code constructions of MR-MUB codes and MUB codes with the smallest code redundancy. Fifth, we establish a lower bound of the update complexity of MR-MUB codes, which can be used to prove that the minimum update complexity of irregular array codes may not be achieved by MR-MUB codes. Last, we construct a class of $(n = k + 2, k)$ vertical maximum-distance separable (MDS) array codes that can achieve all of the minimum code redundancy, the minimum update bandwidth and the optimal repair bandwidth of irregular array codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 178,598 |
2009.09841 | Finding Influential Instances for Distantly Supervised Relation Extraction | Distant supervision (DS) is a strong way to expand the datasets for enhancing relation extraction (RE) models but often suffers from high label noise. Current works based on attention, reinforcement learning, or GAN are black-box models so they neither provide meaningful interpretation of sample selection in DS nor stability on different domains. On the contrary, this work proposes a novel model-agnostic instance sampling method for DS by influence function (IF), namely REIF. Our method identifies favorable/unfavorable instances in the bag based on IF, then does dynamic instance sampling. We design a fast influence sampling algorithm that reduces the computational complexity from $\mathcal{O}(mn)$ to $\mathcal{O}(1)$, with analyzing its robustness on the selected sampling function. Experiments show that by simply sampling the favorable instances during training, REIF is able to win over a series of baselines that have complicated architectures. We also demonstrate that REIF can support interpretable instance selection. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 196,709 |
2404.02342 | A Computational Analysis of Lyric Similarity Perception | In musical compositions that include vocals, lyrics significantly contribute to artistic expression. Consequently, previous studies have introduced the concept of a recommendation system that suggests lyrics similar to a user's favorites or personalized preferences, aiding in the discovery of lyrics among millions of tracks. However, many of these systems do not fully consider human perceptions of lyric similarity, primarily due to limited research in this area. To bridge this gap, we conducted a comparative analysis of computational methods for modeling lyric similarity with human perception. Results indicated that computational models based on similarities between embeddings from pre-trained BERT-based models, the audio from which the lyrics are derived, and phonetic components are indicative of perceptual lyric similarity. This finding underscores the importance of semantic, stylistic, and phonetic similarities in human perception about lyric similarity. We anticipate that our findings will enhance the development of similarity-based lyric recommendation systems by offering pseudo-labels for neural network development and introducing objective evaluation metrics. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 443,814 |
2303.14822 | MGTBench: Benchmarking Machine-Generated Text Detection | Nowadays, powerful large language models (LLMs) such as ChatGPT have demonstrated revolutionary power in a variety of tasks. Consequently, the detection of machine-generated texts (MGTs) is becoming increasingly crucial as LLMs become more advanced and prevalent. These models have the ability to generate human-like language, making it challenging to discern whether a text is authored by a human or a machine. This raises concerns regarding authenticity, accountability, and potential bias. However, existing methods for detecting MGTs are evaluated using different model architectures, datasets, and experimental settings, resulting in a lack of a comprehensive evaluation framework that encompasses various methodologies. Furthermore, it remains unclear how existing detection methods would perform against powerful LLMs. In this paper, we fill this gap by proposing the first benchmark framework for MGT detection against powerful LLMs, named MGTBench. Extensive evaluations on public datasets with curated texts generated by various powerful LLMs such as ChatGPT-turbo and Claude demonstrate the effectiveness of different detection methods. Our ablation study shows that a larger number of words in general leads to better performance and most detection methods can achieve similar performance with much fewer training samples. Moreover, we delve into a more challenging task: text attribution. Our findings indicate that the model-based detection methods still perform well in the text attribution task. To investigate the robustness of different detection methods, we consider three adversarial attacks, namely paraphrasing, random spacing, and adversarial perturbations. We discover that these attacks can significantly diminish detection effectiveness, underscoring the critical need for the development of more robust detection methods. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 354,256 |
2304.10180 | Cyber Security in Smart Manufacturing (Threats, Landscapes Challenges) | Industry 4.0 is a blend of the hyper-connected digital industry within the two worlds of Information Technology (IT) and Operational Technology (OT). With this amalgamated opportunity, smart manufacturing involves production assets with the manufacturing equipment having its own intelligence, while the system-wide intelligence is provided by the cyber layer. However, smart manufacturing has now become one of the prime targets of cyber threats due to vulnerabilities in the existing process of operation. Since smart manufacturing covers a vast area of production industries, from cyber-physical systems to additive manufacturing, to autonomous vehicles, to cloud-based IIoT (Industrial IoT), to robotic production, cyber threats stand out in this regard, raising questions about how to connect manufacturing resources by network, how to integrate a whole process chain for factory production, etc. The cybersecurity principles of confidentiality, integrity and availability are essential to the proper operational threat model, known as the digital thread, ensuring secure manufacturing. In this work, a literature survey is presented of the existing threat models, attack vectors and future challenges over the digital thread of smart manufacturing. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 359,321 |
2412.06507 | BATseg: Boundary-aware Multiclass Spinal Cord Tumor Segmentation on 3D MRI Scans | Spinal cord tumors significantly contribute to neurological morbidity and mortality. Precise morphometric quantification, encompassing the size, location, and type of such tumors, holds promise for optimizing treatment planning strategies. Although recent methods have demonstrated excellent performance in medical image segmentation, they primarily focus on discerning shapes with relatively large morphology such as brain tumors, ignoring the challenging problem of identifying spinal cord tumors which tend to have tiny sizes, diverse locations, and shapes. To tackle this hard problem of multiclass spinal cord tumor segmentation, we propose a new method, called BATseg, to learn a tumor surface distance field by applying our new multiclass boundary-aware loss function. To verify the effectiveness of our approach, we also introduce the first and large-scale spinal cord tumor dataset. It comprises gadolinium-enhanced T1-weighted 3D MRI scans from 653 patients and contains the four most common spinal cord tumor types: astrocytomas, ependymomas, hemangioblastomas, and spinal meningiomas. Extensive experiments on our dataset and another public kidney tumor segmentation dataset show that our proposed method achieves superior performance for multiclass tumor segmentation. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 515,272 |
2209.01373 | TogetherNet: Bridging Image Restoration and Object Detection Together via Dynamic Enhancement Learning | Adverse weather conditions such as haze, rain, and snow often impair the quality of captured images, causing detection networks trained on normal images to generalize poorly in these scenarios. In this paper, we raise an intriguing question - if the combination of image restoration and object detection, can boost the performance of cutting-edge detectors in adverse weather conditions. To answer it, we propose an effective yet unified detection paradigm that bridges these two subtasks together via dynamic enhancement learning to discern objects in adverse weather conditions, called TogetherNet. Different from existing efforts that intuitively apply image dehazing/deraining as a pre-processing step, TogetherNet considers a multi-task joint learning problem. Following the joint learning scheme, clean features produced by the restoration network can be shared to learn better object detection in the detection network, thus helping TogetherNet enhance the detection capacity in adverse weather conditions. Besides the joint learning architecture, we design a new Dynamic Transformer Feature Enhancement module to improve the feature extraction and representation capabilities of TogetherNet. Extensive experiments on both synthetic and real-world datasets demonstrate that our TogetherNet outperforms the state-of-the-art detection approaches by a large margin both quantitatively and qualitatively. Source code is available at https://github.com/yz-wang/TogetherNet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 315,864 |
1212.2860 | Pituitary Adenoma Volumetry with 3D Slicer | In this study, we present pituitary adenoma volumetry using the free and open source medical image computing platform for biomedical research: (3D) Slicer. Volumetric changes in cerebral pathologies like pituitary adenomas are a critical factor in treatment decisions by physicians and in general the volume is acquired manually. Therefore, manual slice-by-slice segmentations in magnetic resonance imaging (MRI) data, which have been obtained at regular intervals, are performed. In contrast to this manual time consuming slice-by-slice segmentation process Slicer is an alternative which can be significantly faster and less user intensive. In this contribution, we compare pure manual segmentations of ten pituitary adenomas with semi-automatic segmentations under Slicer. Thus, physicians drew the boundaries completely manually on a slice-by-slice basis and performed a Slicer-enhanced segmentation using the competitive region-growing based module of Slicer named GrowCut. Results showed that the time and user effort required for GrowCut-based segmentations were on average about thirty percent less than the pure manual segmentations. Furthermore, we calculated the Dice Similarity Coefficient (DSC) between the manual and the Slicer-based segmentations to proof that the two are comparable yielding an average DSC of 81.97\pm3.39%. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 20,352 |
2109.12998 | Algebraic Semantics of Generalized RIFs | A number of numeric measures like rough inclusion functions (RIFs) are used in general rough sets and soft computing. But these are often intrusive by definition, and amount to making unjustified assumptions about the data. The contamination problem is also about recognizing the domains of discourses involved in this, specifying errors and reducing data intrusion relative to them. In this research, weak quasi rough inclusion functions (wqRIFs) are generalized to general granular operator spaces with scope for limiting contamination. New algebraic operations are defined over collections of such functions, and are studied by the present author. It is shown by her that the algebras formed by the generalized wqRIFs are ordered hemirings with additional operators. By contrast, the generalized rough inclusion functions lack similar structure. This potentially contributes to improving the selection (possibly automatic) of such functions, training methods, and reducing contamination (and data intrusion) in applications. The underlying framework and associated concepts are explained in some detail, as they are relatively new. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 257,486 |
1108.5567 | Parsing Combinatory Categorial Grammar with Answer Set Programming: Preliminary Report | Combinatory categorial grammar (CCG) is a grammar formalism used for natural language parsing. CCG assigns structured lexical categories to words and uses a small set of combinatory rules to combine these categories to parse a sentence. In this work we propose and implement a new approach to CCG parsing that relies on a prominent knowledge representation formalism, answer set programming (ASP) - a declarative programming paradigm. We formulate the task of CCG parsing as a planning problem and use an ASP computational tool to compute solutions that correspond to valid parses. Compared to other approaches, there is no need to implement a specific parsing algorithm using such a declarative method. Our approach aims at producing all semantically distinct parse trees for a given sentence. From this goal, normalization and efficiency issues arise, and we deal with them by combining and extending existing strategies. We have implemented a CCG parsing tool kit - AspCcgTk - that uses ASP as its main computational means. The C&C supertagger can be used as a preprocessor within AspCcgTk, which allows us to achieve wide-coverage natural language parsing. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 11,844 |
2107.05815 | Safety and progress proofs for a reactive planner and controller for autonomous driving | In this paper, we perform safety and performance analysis of an autonomous vehicle that implements reactive planner and controller for navigating a race lap. Unlike traditional planning algorithms that have access to a map of the environment, reactive planner generates the plan purely based on the current input from sensors. Our reactive planner selects a waypoint on the local Voronoi diagram and we use a pure-pursuit controller to navigate towards the waypoint. Our safety and performance analysis has two parts. The first part demonstrates that the reactive planner computes a plan that is locally consistent with the Voronoi plan computed with full map. The second part involves modeling of the evolution of vehicle navigating along the Voronoi diagram as a hybrid automata. For proving the safety and performance specification, we compute the reachable set of this hybrid automata and employ some enhancements that make this computation easier. We demonstrate that an autonomous vehicle implementing our reactive planner and controller is safe and successfully completes a lap for five different circuits. In addition, we have implemented our planner and controller in a simulation environment as well as a scaled down autonomous vehicle and demonstrate that our planner works well for a wide variety of circuits. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 245,894 |
2012.06555 | OPAC: Opportunistic Actor-Critic | Actor-critic methods, a type of model-free reinforcement learning (RL), have achieved state-of-the-art performances in many real-world domains in continuous control. Despite their success, the wide-scale deployment of these models is still far from reality. The main problems in these actor-critic methods are inefficient exploration and sub-optimal policies. Soft Actor-Critic (SAC) and Twin Delayed Deep Deterministic Policy Gradient (TD3), two such cutting-edge algorithms, suffer from these issues. SAC effectively addressed the problems of sample complexity and convergence brittleness to hyper-parameters and thus outperformed all state-of-the-art algorithms including TD3 in harder tasks, whereas TD3 produced moderate results in all environments. SAC suffers from inefficient exploration owing to the Gaussian nature of its policy, which causes borderline performance in simpler tasks. In this paper, we introduce Opportunistic Actor-Critic (OPAC), a novel model-free deep RL algorithm that employs a better exploration policy and exhibits lower variance. OPAC combines some of the most powerful features of TD3 and SAC and aims to optimize a stochastic policy in an off-policy way. For calculating the target Q-values, instead of two critics, OPAC uses three critics and, based on the environment complexity, opportunistically chooses how the target Q-value is computed from the critics' evaluation. We have systematically evaluated the algorithm on MuJoCo environments where it achieves state-of-the-art performance and outperforms or at least equals the performance of TD3 and SAC. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 211,153
2110.14864 | Selective Sampling for Online Best-arm Identification | This work considers the problem of selective-sampling for best-arm identification. Given a set of potential options $\mathcal{Z}\subset\mathbb{R}^d$, a learner aims to compute with probability greater than $1-\delta$, $\arg\max_{z\in \mathcal{Z}} z^{\top}\theta_{\ast}$ where $\theta_{\ast}$ is unknown. At each time step, a potential measurement $x_t\in \mathcal{X}\subset\mathbb{R}^d$ is drawn IID and the learner can either choose to take the measurement, in which case they observe a noisy measurement of $x^{\top}\theta_{\ast}$, or to abstain from taking the measurement and wait for a potentially more informative point to arrive in the stream. Hence the learner faces a fundamental trade-off between the number of labeled samples they take and when they have collected enough evidence to declare the best arm and stop sampling. The main results of this work precisely characterize this trade-off between labeled samples and stopping time and provide an algorithm that nearly-optimally achieves the minimal label complexity given a desired stopping time. In addition, we show that the optimal decision rule has a simple geometric form based on deciding whether a point is in an ellipse or not. Finally, our framework is general enough to capture binary classification improving upon previous works. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 263,671 |
2410.03234 | Showing LLM-Generated Code Selectively Based on Confidence of LLMs | Large Language Models (LLMs) have shown impressive abilities in code generation, but they may generate erroneous programs. Reading a program takes ten times longer than writing it. Showing these erroneous programs to developers will waste developers' energy and introduce security risks to software. To address the above limitations, we propose HonestCoder, a novel LLM-based code generation approach. HonestCoder selectively shows the generated programs to developers based on LLMs' confidence. The confidence provides valuable insights into the correctness of generated programs. To achieve this goal, we propose a novel approach to estimate LLMs' confidence in code generation. It estimates confidence by measuring the multi-modal similarity between LLMs-generated programs. We collect and release a multilingual benchmark named TruthCodeBench, which consists of 2,265 samples and covers two popular programming languages (i.e., Python and Java). We apply HonestCoder to four popular LLMs (e.g., DeepSeek-Coder and Code Llama) and evaluate it on TruthCodeBench. Based on the experiments, we obtain the following insights. (1) HonestCoder can effectively estimate LLMs' confidence and accurately determine the correctness of generated programs. For example, HonestCoder outperforms the state-of-the-art baseline by 27.79% in AUROC and 63.74% in AUCPR. (2) HonestCoder can decrease the number of erroneous programs shown to developers. Compared to eight baselines, it can show more correct programs and fewer erroneous programs to developers. (3) Compared to showing code indiscriminately, HonestCoder only adds slight time overhead (approximately 0.4 seconds per requirement). (4) We discuss future directions to facilitate the application of LLMs in software development. We hope this work can motivate broad discussions about measuring the reliability of LLMs' outputs in performing code-related tasks. 
| false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | 494,693 |
1803.07950 | End-to-End Video Captioning with Multitask Reinforcement Learning | Although end-to-end (E2E) learning has led to impressive progress on a variety of visual understanding tasks, it is often impeded by hardware constraints (e.g., GPU memory) and is prone to overfitting. When it comes to video captioning, one of the most challenging benchmark tasks in computer vision, those limitations of E2E learning are especially amplified by the fact that both the input videos and output captions are lengthy sequences. Indeed, state-of-the-art methods for video captioning process video frames by convolutional neural networks and generate captions by unrolling recurrent neural networks. If we connect them in an E2E manner, the resulting model is both memory-consuming and data-hungry, making it extremely hard to train. In this paper, we propose a multitask reinforcement learning approach to training an E2E video captioning model. The main idea is to mine and construct as many effective tasks (e.g., attributes, rewards, and the captions) as possible from the human captioned videos such that they can jointly regulate the search space of the E2E neural network, from which an E2E video captioning model can be found and generalized to the testing phase. To the best of our knowledge, this is the first video captioning model that is trained end-to-end from the raw video input to the caption output. Experimental results show that such a model outperforms existing ones by a large margin on two benchmark video captioning datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 93,161
2304.03990 | Block-regularized 5$\times$2 Cross-validated McNemar's Test for
Comparing Two Classification Algorithms | In the task of comparing two classification algorithms, the widely-used McNemar's test aims to infer the presence of a significant difference between the error rates of the two classification algorithms. However, the power of the conventional McNemar's test is usually unpromising because the hold-out (HO) method in the test merely uses a single train-validation split that usually produces a highly varied estimation of the error rates. In contrast, a cross-validation (CV) method repeats the HO method multiple times and produces a stable estimation. Therefore, a CV method has a great advantage in improving the power of McNemar's test. Among all types of CV methods, a block-regularized 5$\times$2 CV (BCV) has been shown in many previous studies to be superior to the other CV methods in the comparison task of algorithms because the 5$\times$2 BCV can produce a high-quality estimator of the error rate by regularizing the numbers of overlapping records between all training sets. In this study, we compress the 10 correlated contingency tables in the 5$\times$2 BCV to form an effective contingency table. Then, we define a 5$\times$2 BCV McNemar's test on the basis of the effective contingency table. We demonstrate the reasonable type I error and the promising power of the proposed 5$\times$2 BCV McNemar's test on multiple simulated and real-world data sets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 357,023
2405.04219 | Iterative Experience Refinement of Software-Developing Agents | Autonomous agents powered by large language models (LLMs) show significant potential for achieving high autonomy in various scenarios such as software development. Recent research has shown that LLM agents can leverage past experiences to reduce errors and enhance efficiency. However, the static experience paradigm, reliant on a fixed collection of past experiences acquired heuristically, lacks iterative refinement and thus hampers agents' adaptability. In this paper, we introduce the Iterative Experience Refinement framework, enabling LLM agents to refine experiences iteratively during task execution. We propose two fundamental patterns: the successive pattern, refining based on nearest experiences within a task batch, and the cumulative pattern, acquiring experiences across all previous task batches. Augmented with our heuristic experience elimination, the method prioritizes high-quality and frequently-used experiences, effectively managing the experience space and enhancing efficiency. Extensive experiments show that while the successive pattern may yield superior results, the cumulative pattern provides more stable performance. Moreover, experience elimination facilitates achieving better performance using just 11.54% of a high-quality subset. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | true | false | false | true | 452,489 |
1703.07491 | SUM: Sequential Scene Understanding and Manipulation | In order to perform autonomous sequential manipulation tasks, perception in cluttered scenes remains a critical challenge for robots. In this paper, we propose a probabilistic approach for robust sequential scene estimation and manipulation - Sequential Scene Understanding and Manipulation (SUM). SUM considers uncertainty due to discriminative object detection and recognition in the generative estimation of the most likely object poses maintained over time to achieve a robust estimation of the scene under heavy occlusion in unstructured environments. Our method utilizes candidates from a discriminative object detector and recognizer to guide the generative process of sampling scene hypotheses, and each scene hypothesis is evaluated against the observations. SUM also maintains beliefs over scene hypotheses across robot physical actions for better estimation and robustness against noisy detections. We conduct extensive experiments to show that our approach is able to perform robust estimation and manipulation. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 70,406
2211.13952 | Contextual Decision-Making with Knapsacks Beyond the Worst Case | We study the framework of a dynamic decision-making scenario with resource constraints. In this framework, an agent, whose target is to maximize the total reward under the initial inventory, selects an action in each round upon observing a random request, leading to a reward and resource consumptions that are further associated with an unknown random external factor. While previous research has already established an $\widetilde{O}(\sqrt{T})$ worst-case regret for this problem, this work offers two results that go beyond the worst-case perspective: one for the worst-case gap between benchmarks and another for logarithmic regret rates. We first show that an $\Omega(\sqrt{T})$ distance between the commonly used fluid benchmark and the online optimum is unavoidable when the former has a degenerate optimal solution. On the algorithmic side, we merge the re-solving heuristic with distribution estimation skills and propose an algorithm that achieves an $\widetilde{O}(1)$ regret as long as the fluid LP has a unique and non-degenerate solution. Furthermore, we prove that our algorithm maintains a near-optimal $\widetilde{O}(\sqrt{T})$ regret even in the worst cases and extend these results to the setting where the request and external factor are continuous. Regarding information structure, our regret results are obtained under two feedback models, respectively, where the algorithm accesses the external factor at the end of each round and at the end of a round only when a non-null action is executed. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 332,663 |
2312.10947 | LabelCraft: Empowering Short Video Recommendations with Automated Label
Crafting | Short video recommendations often face limitations due to the quality of user feedback, which may not accurately depict user interests. To tackle this challenge, a new task has emerged: generating more dependable labels from original feedback. Existing label generation methods rely on manual rules, demanding substantial human effort and potentially misaligning with the desired objectives of the platform. To transcend these constraints, we introduce LabelCraft, a novel automated label generation method explicitly optimizing pivotal operational metrics for platform success. By formulating label generation as a higher-level optimization problem above recommender model optimization, LabelCraft introduces a trainable labeling model for automatic label mechanism modeling. Through meta-learning techniques, LabelCraft effectively addresses the bi-level optimization hurdle posed by the recommender and labeling models, enabling the automatic acquisition of intricate label generation mechanisms. Extensive experiments on real-world datasets corroborate LabelCraft's excellence across varied operational metrics, encompassing usage time, user engagement, and retention. Codes are available at https://github.com/baiyimeng/LabelCraft. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 416,381 |
1405.4951 | Secure Friend Discovery via Privacy-Preserving and Decentralized
Community Detection | The problem of secure friend discovery on a social network has long been proposed and studied. The requirement is that a pair of nodes can make befriending decisions with minimum information exposed to the other party. In this paper, we propose to use community detection to tackle the problem of secure friend discovery. We formulate the first privacy-preserving and decentralized community detection problem as a multi-objective optimization. We design the first protocol to solve this problem, which transforms community detection to a series of Private Set Intersection (PSI) instances using Truncated Random Walk (TRW). Preliminary theoretical results show that our protocol can uncover communities with overwhelming probability and preserve privacy. We also discuss future works, potential extensions and variations. | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | 33,223 |
2109.01504 | Ghost Loss to Question the Reliability of Training Data | Supervised image classification problems rely on training data assumed to have been correctly annotated; this assumption underpins most works in the field of deep learning. In consequence, during its training, a network is forced to match the label provided by the annotator and is not given the flexibility to choose an alternative to inconsistencies that it might be able to detect. Therefore, erroneously labeled training images may end up ``correctly'' classified into classes to which they do not actually belong. This may reduce the performance of the network and thus encourage building more complex networks without even checking the quality of the training data. In this work, we question the reliability of the annotated datasets. For that purpose, we introduce the notion of ghost loss, which can be seen as a regular loss that is zeroed out for some predicted values in a deterministic way and that allows the network to choose an alternative to the given label without being penalized. After a proof of concept experiment, we use the ghost loss principle to detect confusing images and erroneously labeled images in well-known training datasets (MNIST, Fashion-MNIST, SVHN, CIFAR10) and we provide a new tool, called sanity matrix, for summarizing these confusions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 253,451
1906.03049 | Computing Tight Differential Privacy Guarantees Using FFT | Differentially private (DP) machine learning has recently become popular. The privacy loss of DP algorithms is commonly reported using $(\varepsilon,\delta)$-DP. In this paper, we propose a numerical accountant for evaluating the privacy loss for algorithms with continuous one dimensional output. This accountant can be applied to the subsampled multidimensional Gaussian mechanism which underlies the popular DP stochastic gradient descent. The proposed method is based on a numerical approximation of an integral formula which gives the exact $(\varepsilon,\delta)$-values. The approximation is carried out by discretising the integral and by evaluating discrete convolutions using the fast Fourier transform algorithm. We give both theoretical error bounds and numerical error estimates for the approximation. Experimental comparisons with state-of-the-art techniques demonstrate significant improvements in bound tightness and/or computation time. Python code for the method can be found in Github (https://github.com/DPBayes/PLD-Accountant/). | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 134,265 |
1703.03454 | Sample Efficient Feature Selection for Factored MDPs | In reinforcement learning, the state of the real world is often represented by feature vectors. However, not all of the features may be pertinent for solving the current task. We propose Feature Selection Explore and Exploit (FS-EE), an algorithm that automatically selects the necessary features while learning a Factored Markov Decision Process, and prove that under mild assumptions, its sample complexity scales with the in-degree of the dynamics of just the necessary features, rather than the in-degree of all features. This can result in a much better sample complexity when the in-degree of the necessary features is smaller than the in-degree of all features. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 69,733 |
1307.2541 | Geospatial Narratives and their Spatio-Temporal Dynamics: Commonsense
Reasoning for High-level Analyses in Geographic Information Systems | The modelling, analysis, and visualisation of dynamic geospatial phenomena has been identified as a key developmental challenge for next-generation Geographic Information Systems (GIS). In this context, the envisaged paradigmatic extensions to contemporary foundational GIS technology raise fundamental questions concerning the ontological, formal representational, and (analytical) computational methods that would underlie their spatial information theoretic underpinnings. We present the conceptual overview and architecture for the development of high-level semantic and qualitative analytical capabilities for dynamic geospatial domains. Building on formal methods in the areas of commonsense reasoning, qualitative reasoning, spatial and temporal representation and reasoning, reasoning about actions and change, and computational models of narrative, we identify concrete theoretical and practical challenges that accrue in the context of formal reasoning about `space, events, actions, and change'. With this as a basis, and within the backdrop of an illustrated scenario involving the spatio-temporal dynamics of urban narratives, we address specific problems and solution techniques chiefly involving `qualitative abstraction', `data integration and spatial consistency', and `practical geospatial abduction'. From a broad topical viewpoint, we propose that next-generation dynamic GIS technology demands a transdisciplinary scientific perspective that brings together Geography, Artificial Intelligence, and Cognitive Science. 
Keywords: artificial intelligence; cognitive systems; human-computer interaction; geographic information systems; spatio-temporal dynamics; computational models of narrative; geospatial analysis; geospatial modelling; ontology; qualitative spatial modelling and reasoning; spatial assistance systems | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 25,721 |
2306.07895 | A dual number formulation to efficiently compute higher order
directional derivatives | This contribution proposes a new formulation to efficiently compute directional derivatives of orders one to four. The formulation is based on automatic differentiation implemented with dual numbers. Directional derivatives are particular cases of symmetric multilinear forms; therefore, using their symmetric properties and their coordinate representation, we implement functions to calculate mixed partial derivatives. Moreover, with directional derivatives, we deduce concise formulas for the velocity, acceleration, jerk, and jounce/snap vectors. The utility of our formulation is demonstrated with three examples. The first example presents a comparison against the forward mode of finite differences to compute the fourth-order directional derivative of a scalar function. To this end, we have coded the finite differences method to calculate partial derivatives up to the fourth order, to any order of approximation. The second example presents efficient computations of the velocity, acceleration, jerk, and jounce/snap. Finally, the third example is related to the computation of some partial derivatives. The implemented code of the proposed formulation and the finite differences method is provided as supplementary material to this article. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 373,182
1804.03578 | Towards Training Probabilistic Topic Models on Neuromorphic Multi-chip
Systems | Probabilistic topic models are popular unsupervised learning methods, including probabilistic latent semantic indexing (pLSI) and latent Dirichlet allocation (LDA). To date, their training has been implemented on general-purpose computers (GPCs), which are flexible in programming but energy-consuming. Towards low-energy implementations, this paper investigates their training on an emerging hardware technology called neuromorphic multi-chip systems (NMSs). NMSs are very effective for a family of algorithms called spiking neural networks (SNNs). We present three SNNs to train topic models. The first SNN is a batch algorithm combining the conventional collapsed Gibbs sampling (CGS) algorithm and an inference SNN to train LDA. The other two SNNs are online algorithms targeting both energy- and storage-limited environments. The two online algorithms are equivalent to training LDA using maximum-a-posteriori estimation and maximizing the semi-collapsed likelihood, respectively. They use novel, tailored ordinary differential equations for stochastic optimization. We simulate the new algorithms and show that they are comparable with the GPC algorithms, while being suitable for NMS implementation. We also propose an extension to train pLSI and a method to prune the network to obey the limited fan-in of some NMSs. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 94,654
2410.19626 | On warped product on information geometry: statistical manifolds and
statistical models | We study a differential geometric construction, the warped product, on the background geometry of information theory. Divergences, dual structures, and symmetric 3-tensors are studied under this construction, and we show that a warped product of manifolds endowed with such structures is itself endowed with the same geometric notions. However, the warped product does not preserve canonical divergences, which in particular shows that the warped product lacks a natural interpretation in the information theory setting. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 502,390
2411.06116 | Supernotes: Driving Consensus in Crowd-Sourced Fact-Checking | X's Community Notes, a crowd-sourced fact-checking system, allows users to annotate potentially misleading posts. Notes rated as helpful by a diverse set of users are prominently displayed below the original post. While demonstrably effective at reducing misinformation's impact when notes are displayed, there is an opportunity for notes to appear on many more posts: for 91% of posts where at least one note is proposed, no notes ultimately achieve sufficient support from diverse users to be shown on the platform. This motivates the development of Supernotes: AI-generated notes that synthesize information from several existing community notes and are written to foster consensus among a diverse set of users. Our framework uses an LLM to generate many diverse Supernote candidates from existing proposed notes. These candidates are then evaluated by a novel scoring model, trained on millions of historical Community Notes ratings, selecting candidates that are most likely to be rated helpful by a diverse set of users. To test our framework, we ran a human subjects experiment in which we asked participants to compare the Supernotes generated by our framework to the best existing community notes for 100 sample posts. We found that participants rated the Supernotes as significantly more helpful, and when asked to choose between the two, preferred the Supernotes 75.2% of the time. Participants also rated the Supernotes more favorably than the best existing notes on quality, clarity, coverage, context, and argumentativeness. Finally, in a follow-up experiment, we asked participants to compare the Supernotes against LLM-generated summaries and found that the participants rated the Supernotes significantly more helpful, demonstrating that both the LLM-based candidate generation and the consensus-driven scoring play crucial roles in creating notes that effectively build consensus among diverse users. 
| false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 506,974 |
2006.10436 | Low-Rank Autoregressive Tensor Completion for Multivariate Time Series
Forecasting | Time series prediction has been a long-standing research topic and an essential application in many domains. Modern time series collected from sensor networks (e.g., energy consumption and traffic flow) are often large-scale and incomplete with considerable corruption and missing values, making it difficult to perform accurate predictions. In this paper, we propose a low-rank autoregressive tensor completion (LATC) framework to model multivariate time series data. The key idea of LATC is to transform the original multivariate time series matrix (e.g., sensor$\times$time point) to a third-order tensor structure (e.g., sensor$\times$time of day$\times$day) by introducing an additional temporal dimension, which allows us to model the inherent rhythms and seasonality of time series as global patterns. With the tensor structure, we can transform the time series prediction and missing data imputation problems into a universal low-rank tensor completion problem. Besides minimizing tensor rank, we also integrate a novel autoregressive norm on the original matrix representation into the objective function. The two components serve different roles. The low-rank structure allows us to effectively capture the global consistency and trends across all the three dimensions (i.e., similarity among sensors, similarity of different days, and current time vs. the same time of historical days). The autoregressive norm can better model the local temporal trends. Our numerical experiments on three real-world data sets demonstrate the superiority of the integration of global and local trends in LATC in both missing data imputation and rolling prediction tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 182,885
2201.13019 | On the Robustness of Quality Measures for GANs | This work evaluates the robustness of quality measures of generative models such as Inception Score (IS) and Fr\'echet Inception Distance (FID). Analogous to the vulnerability of deep models against a variety of adversarial attacks, we show that such metrics can also be manipulated by additive pixel perturbations. Our experiments indicate that one can generate a distribution of images with very high scores but low perceptual quality. Conversely, one can optimize for small imperceptible perturbations that, when added to real world images, deteriorate their scores. We further extend our evaluation to generative models themselves, including the state of the art network StyleGANv2. We show the vulnerability of both the generative model and the FID against additive perturbations in the latent space. Finally, we show that the FID can be robustified by simply replacing the standard Inception with a robust Inception. We validate the effectiveness of the robustified metric through extensive experiments, showing it is more robust against manipulation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 277,863 |
1903.11752 | ThunderNet: Towards Real-time Generic Object Detection | Real-time generic object detection on mobile platforms is a crucial but challenging computer vision task. However, previous CNN-based detectors suffer from enormous computational cost, which hinders them from real-time inference in computation-constrained scenarios. In this paper, we investigate the effectiveness of two-stage detectors in real-time generic detection and propose a lightweight two-stage detector named ThunderNet. In the backbone part, we analyze the drawbacks in previous lightweight backbones and present a lightweight backbone designed for object detection. In the detection part, we exploit an extremely efficient RPN and detection head design. To generate more discriminative feature representation, we design two efficient architecture blocks, Context Enhancement Module and Spatial Attention Module. At last, we investigate the balance between the input resolution, the backbone, and the detection head. Compared with lightweight one-stage detectors, ThunderNet achieves superior performance with only 40% of the computational cost on PASCAL VOC and COCO benchmarks. Without bells and whistles, our model runs at 24.1 fps on an ARM-based device. To the best of our knowledge, this is the first real-time detector reported on ARM platforms. Our code and models are available at \url{https://github.com/qinzheng93/ThunderNet}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 125,575 |
2012.08668 | Mitigating Bias in Calibration Error Estimation | For an AI system to be reliable, the confidence it expresses in its decisions must match its accuracy. To assess the degree of match, examples are typically binned by confidence and the per-bin mean confidence and accuracy are compared. Most research in calibration focuses on techniques to reduce this empirical measure of calibration error, ECE_bin. We instead focus on assessing statistical bias in this empirical measure, and we identify better estimators. We propose a framework through which we can compute the bias of a particular estimator for an evaluation data set of a given size. The framework involves synthesizing model outputs that have the same statistics as common neural architectures on popular data sets. We find that binning-based estimators with bins of equal mass (number of instances) have lower bias than estimators with bins of equal width. Our results indicate two reliable calibration-error estimators: the debiased estimator (Brocker, 2012; Ferro and Fricker, 2012) and a method we propose, ECE_sweep, which uses equal-mass bins and chooses the number of bins to be as large as possible while preserving monotonicity in the calibration function. With these estimators, we observe improvements in the effectiveness of recalibration methods and in the detection of model miscalibration. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 211,824 |
2303.11329 | Sound Localization from Motion: Jointly Learning Sound Direction and
Camera Rotation | The images and sounds that we perceive undergo subtle but geometrically consistent changes as we rotate our heads. In this paper, we use these cues to solve a problem we call Sound Localization from Motion (SLfM): jointly estimating camera rotation and localizing sound sources. We learn to solve these tasks solely through self-supervision. A visual model predicts camera rotation from a pair of images, while an audio model predicts the direction of sound sources from binaural sounds. We train these models to generate predictions that agree with one another. At test time, the models can be deployed independently. To obtain a feature representation that is well-suited to solving this challenging problem, we also propose a method for learning an audio-visual representation through cross-view binauralization: estimating binaural sound from one view, given images and sound from another. Our model can successfully estimate accurate rotations on both real and synthetic scenes, and localize sound sources with accuracy competitive with state-of-the-art self-supervised approaches. Project site: https://ificl.github.io/SLfM/ | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 352,807 |
2401.07381 | Diagrammatic Rules for Triad Census | In network theory, a triad census is a method designed to categorize and enumerate the various types of subgraphs with three nodes and their connecting edges within a network. Triads serve as fundamental building blocks for comprehending the structure and dynamics of networks, and the triad census offers a systematic approach to their classification. Typically, triad counts are obtained numerically, but lesser-known methods have been developed to precisely evaluate them without the need for sampling. In our study, we build upon Moody's matrix approach, presenting general diagrammatic rules that systematically and intuitively generate closed formulas for the occurrence numbers of triads in a network. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 421,514 |
1004.4448 | Deblured Gaussian Blurred Images | This paper undertakes the study of restoring Gaussian blurred images by using four deblurring techniques: the Wiener filter, the regularized filter, the Lucy-Richardson deconvolution algorithm, and the blind deconvolution algorithm, given knowledge of the Point Spread Function (PSF) of the blurred image, with different values of size and alpha, and then corrupted by Gaussian noise. The same is applied to a remote sensing image, and the techniques are compared with one another so as to choose the best technique for restoring or deblurring the image. This paper also undertakes the study of restoring a Gaussian blurred image with no information about the PSF, by using the same four techniques after estimating the PSF, the number of iterations, and its weight threshold, so as to choose the best estimates for restoring or deblurring the image. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 6,274
2106.05391 | Fairness-Aware Node Representation Learning | Node representation learning has demonstrated its effectiveness for various applications on graphs. Particularly, recent developments in contrastive learning have led to promising results in unsupervised node representation learning for a number of tasks. Despite the success of graph contrastive learning and consequent growing interest, fairness is largely under-explored in the field. To this end, this study addresses fairness issues in graph contrastive learning with fairness-aware graph augmentation designs, through adaptive feature masking and edge deletion. In the study, different fairness notions on graphs are introduced, which serve as guidelines for the proposed graph augmentations. Furthermore, theoretical analysis is provided to quantitatively prove that the proposed feature masking approach can reduce intrinsic bias. Experimental results on real social networks are presented to demonstrate that the proposed augmentations can enhance fairness in terms of statistical parity and equal opportunity, while providing comparable classification accuracy to state-of-the-art contrastive methods for node classification. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 240,073 |
1611.08681 | Truthful Spectrum Auction for Efficient Anti-Jamming in Cognitive Radio
Networks | One significant challenge in cognitive radio networks is to design a framework in which the selfish secondary users are obliged to interact with each other truthfully. Moreover, due to the vulnerability of these networks against jamming attacks, designing anti-jamming defense mechanisms is equally important. In this paper, we propose a truthful mechanism, robust against jamming, for a dynamic stochastic cognitive radio network consisting of several selfish secondary users and a malicious user. In this model, each secondary user participates in an auction and wishes to use the unjammed spectrum, and the malicious user aims at jamming a channel by corrupting the communication link. A truthful auction mechanism is designed among the secondary users. Furthermore, a zero-sum game is formulated between the set of secondary users and the malicious user. This joint problem is then cast as a randomized two-level auction in which the first auction allocates the vacant channels, and the second one assigns the remaining unallocated channels. We have also adapted this solution into a truthful distributed scheme. Simulation results show that the distributed algorithm can achieve a performance that is close to that of the centralized algorithm, without the added overhead and complexity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 64,542
2306.02709 | Comparative Study on Semi-supervised Learning Applied for Anomaly
Detection in Hydraulic Condition Monitoring System | Condition-based maintenance is becoming increasingly important in hydraulic systems. However, anomaly detection for these systems remains challenging, especially since anomalous data is scarce and labeling such data is tedious and even dangerous. Therefore, it is advisable to make use of unsupervised or semi-supervised methods, especially semi-supervised learning, which utilizes unsupervised learning as a feature extraction mechanism to aid the supervised part when only a small number of labels are available. This study systematically compares semi-supervised learning methods applied for anomaly detection in hydraulic condition monitoring systems. Firstly, thorough data analysis and feature learning were carried out to understand the open-sourced hydraulic condition monitoring dataset. Then, various methods were implemented and evaluated, including traditional stand-alone semi-supervised learning models (e.g., one-class SVM, Robust Covariance), ensemble models (e.g., Isolation Forest), and deep neural network based models (e.g., autoencoder, Hierarchical Extreme Learning Machine (HELM)). In particular, this study customized and implemented an extreme learning machine based semi-supervised HELM model and verified its superiority over other semi-supervised methods. Extensive experiments show that the customized HELM model obtained state-of-the-art performance with the highest accuracy (99.5%), the lowest false positive rate (0.015), and the best F1-score (0.985), beating other semi-supervised methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 371,022
2410.17897 | Value Residual Learning | While Transformer models have achieved remarkable success in various domains, the effectiveness of information propagation through deep networks remains a critical challenge. Standard hidden state residuals often fail to adequately preserve initial token-level information in deeper layers. This paper introduces ResFormer, a novel architecture that enhances information flow by incorporating value residual connections in addition to hidden state residuals. A variant is SVFormer, in which all layers share the first layer's value embedding. Comprehensive empirical evidence demonstrates that ResFormer achieves equivalent validation loss with 13.3\% fewer model parameters and 15.4\% less training data compared to the Transformer, while maintaining similar memory usage and computational cost. Besides, SVFormer reduces KV cache size by nearly half with only a small performance penalty and can be integrated with other KV-efficient methods, yielding further reductions in KV cache, with performance influenced by sequence length and cumulative learning rate. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 501,657
2404.02684 | Cross-Architecture Transfer Learning for Linear-Cost Inference
Transformers | Recently, multiple architectures have been proposed to improve the efficiency of Transformer language models by changing the design of the self-attention block to have linear-cost inference (LCI). A notable approach in this realm is the State-Space Machines (SSMs) architecture, which showed on-par performance on language modeling tasks with self-attention transformers. However, such an architectural change requires a full pretraining of the weights from scratch, which incurs a huge cost for researchers and practitioners who want to use the new architectures. In the more traditional linear attention works, it has been proposed to approximate full attention with linear attention via a swap-and-finetune framework. Motivated by this approach, we propose Cross-Architecture Transfer Learning (XATL), in which the weights of the components shared between LCI and self-attention-based transformers, such as layernorms, MLPs, and input/output embeddings, are directly transferred to the new architecture from already pre-trained model parameters. We tested the efficacy of the method on varying sizes and alternative attention architectures and show that XATL significantly reduces the training time by up to 2.5x and converges to a better minimum, with a model up to 2.6% stronger on the LM benchmarks within the same compute budget. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 443,964
2103.02227 | Data Augmentation with Hierarchical SQL-to-Question Generation for
Cross-domain Text-to-SQL Parsing | Data augmentation has attracted a lot of research attention in the deep learning era for its ability in alleviating data sparseness. The lack of labeled data for unseen evaluation databases is exactly the major challenge for cross-domain text-to-SQL parsing. Previous works either require human intervention to guarantee the quality of generated data, or fail to handle complex SQL queries. This paper presents a simple yet effective data augmentation framework. First, given a database, we automatically produce a large number of SQL queries based on an abstract syntax tree grammar. For better distribution matching, we require that at least 80% of SQL patterns in the training data are covered by generated queries. Second, we propose a hierarchical SQL-to-question generation model to obtain high-quality natural language questions, which is the major contribution of this work. Finally, we design a simple sampling strategy that can greatly improve training efficiency given large amounts of generated data. Experiments on three cross-domain datasets, i.e., WikiSQL and Spider in English, and DuSQL in Chinese, show that our proposed data augmentation framework can consistently improve performance over strong baselines, and the hierarchical generation component is the key for the improvement. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 222,894 |
2103.13610 | An Approach to Improve Robustness of NLP Systems against ASR Errors | Speech-enabled systems typically first convert audio to text through an automatic speech recognition (ASR) model and then feed the text to downstream natural language processing (NLP) modules. The errors of the ASR system can seriously degrade the performance of the NLP modules. Therefore, it is essential to make them robust to ASR errors. Previous work has shown it is effective to employ data augmentation methods to solve this problem by injecting ASR noise during the training process. In this paper, we utilize the prevalent pre-trained language model to generate training samples with ASR-plausible noise. Compared to previous methods, our approach generates ASR noise that better fits the real-world error distribution. Experimental results on spoken language translation (SLT) and spoken language understanding (SLU) show that our approach effectively improves the system robustness against ASR errors and achieves state-of-the-art results on both tasks. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 226,554
2103.04130 | Learning to Generate 3D Shapes with Generative Cellular Automata | We present a probabilistic 3D generative model, named Generative Cellular Automata, which is able to produce diverse and high quality shapes. We formulate the shape generation process as sampling from the transition kernel of a Markov chain, where the sampling chain eventually evolves to the full shape of the learned distribution. The transition kernel employs the local update rules of cellular automata, effectively reducing the search space in a high-resolution 3D grid space by exploiting the connectivity and sparsity of 3D shapes. Our progressive generation only focuses on the sparse set of occupied voxels and their neighborhood, thus enabling the utilization of an expressive sparse convolutional network. We propose an effective training scheme to obtain the local homogeneous rule of generative cellular automata with sequences that are slightly different from the sampling chain but converge to the full shapes in the training data. Extensive experiments on probabilistic shape completion and shape generation demonstrate that our method achieves competitive performance against recent methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 223,532 |
2204.02772 | Detail-recovery Image Deraining via Dual Sample-augmented Contrastive
Learning | The intricacy of rainy image contents often leads cutting-edge deraining models to image degradation including remnant rain, wrongly-removed details, and distorted appearance. Such degradation is further exacerbated when applying the models trained on synthetic data to real-world rainy images. We observe two types of domain gaps between synthetic and real-world rainy images: one exists in rain streak patterns; the other is the pixel-level appearance of rain-free images. To bridge the two domain gaps, we propose a semi-supervised detail-recovery image deraining network (Semi-DRDNet) with dual sample-augmented contrastive learning. Semi-DRDNet consists of three sub-networks: i) for removing rain streaks without remnants, we present a squeeze-and-excitation based rain residual network; ii) for encouraging the lost details to return, we construct a structure detail context aggregation based detail repair network; to our knowledge, this is the first time; and iii) for building efficient contrastive constraints for both rain streaks and clean backgrounds, we exploit a novel dual sample-augmented contrastive regularization network. Semi-DRDNet operates smoothly on both synthetic and real-world rainy data in terms of deraining robustness and detail accuracy. Comparisons on four datasets including our established Real200 show clear improvements of Semi-DRDNet over fifteen state-of-the-art methods. Code and dataset are available at https://github.com/syy-whu/DRD-Net. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 290,079
2305.08967 | Forecast-based charging strategy to prolong the lifetime of lithium-ion
batteries in standalone PV battery systems in Sub-Saharan Africa | Standalone PV battery systems have great potential to power the one billion people worldwide who lack access to electricity. Due to remoteness and poverty, durable and inexpensive systems are required for a broad range of applications. However, today's PV battery systems do not yet fully meet this requirement. Batteries especially prove to be a hindrance, as they represent the most expensive and fastest-aging component in a PV battery system. This work aims to address this by prolonging battery life. For this purpose, a forecast-based charging strategy was developed. As lithium-ion batteries age more slowly at a low state of charge, the goal of the operation strategy is to only charge the battery as much as needed. The impact of the proposed charging strategy is examined in a case study using one year of historical data from 14 standalone systems in Nigeria. It was found that the proposed operation strategy could reduce the average battery state of charge by around 20 percent without causing power outages for the mini-grids. This would significantly extend the life of the battery and ultimately lead to a more durable and cheaper operation of standalone PV battery systems. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 364,465
2007.01787 | Evaluating Uncertainty Estimation Methods on 3D Semantic Segmentation of
Point Clouds | Deep learning models are extensively used in various safety critical applications. Hence these models, along with being accurate, need to be highly reliable. One way of achieving this is by quantifying uncertainty. Bayesian methods for uncertainty quantification (UQ) have been extensively studied for deep learning models applied to images but have been less explored for 3D modalities such as point clouds, often used for robots and autonomous systems. In this work, we evaluate three uncertainty quantification methods, namely Deep Ensembles, MC-Dropout, and MC-DropConnect, on the DarkNet21Seg 3D semantic segmentation model and comprehensively analyze the impact of various parameters, such as the number of models in ensembles or forward passes and drop probability values, on task performance and uncertainty estimate quality. We find that Deep Ensembles outperform other methods in both performance and uncertainty metrics, by a margin of 2.4% in terms of mIOU and 1.3% in terms of accuracy, while providing reliable uncertainty for decision making. | false | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | 185,532
2105.04475 | Self-Guided Curriculum Learning for Neural Machine Translation | In the field of machine learning, the well-trained model is assumed to be able to recover the training labels, i.e. the synthetic labels predicted by the model should be as close to the ground-truth labels as possible. Inspired by this, we propose a self-guided curriculum strategy to encourage the learning of neural machine translation (NMT) models to follow the above recovery criterion, where we cast the recovery degree of each training example as its learning difficulty. Specifically, we adopt the sentence level BLEU score as the proxy of recovery degree. Different from existing curricula relying on linguistic prior knowledge or third-party language models, our chosen learning difficulty is more suitable to measure the degree of knowledge mastery of the NMT models. Experiments on translation benchmarks, including WMT14 English$\Rightarrow$German and WMT17 Chinese$\Rightarrow$English, demonstrate that our approach can consistently improve translation performance against strong baseline Transformer. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 234,519 |
2408.00762 | UniTalker: Scaling up Audio-Driven 3D Facial Animation through A Unified
Model | Audio-driven 3D facial animation aims to map input audio to realistic facial motion. Despite significant progress, limitations arise from inconsistent 3D annotations, restricting previous models to training on specific annotations and thereby constraining the training scale. In this work, we present UniTalker, a unified model featuring a multi-head architecture designed to effectively leverage datasets with varied annotations. To enhance training stability and ensure consistency among multi-head outputs, we employ three training strategies, namely, PCA, model warm-up, and pivot identity embedding. To expand the training scale and diversity, we assemble A2F-Bench, comprising five publicly available datasets and three newly curated datasets. These datasets contain a wide range of audio domains, covering multilingual speech voices and songs, thereby scaling the training data from commonly employed datasets, typically less than 1 hour, to 18.5 hours. With a single trained UniTalker model, we achieve substantial lip vertex error reductions of 9.2% for BIWI dataset and 13.7% for Vocaset. Additionally, the pre-trained UniTalker exhibits promise as the foundation model for audio-driven facial animation tasks. Fine-tuning the pre-trained UniTalker on seen datasets further enhances performance on each dataset, with an average error reduction of 6.3% on A2F-Bench. Moreover, fine-tuning UniTalker on an unseen dataset with only half the data surpasses prior state-of-the-art models trained on the full dataset. The code and dataset are available at the project page https://github.com/X-niper/UniTalker. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 477,964 |
2308.15237 | Assessing Cyclostationary Malware Detection via Feature Selection and
Classification | Cyclostationarity involves periodic statistical variations in signals and processes, commonly used in signal analysis and network security. In the context of attacks, cyclostationarity helps detect malicious behaviors within network traffic, such as traffic patterns in Distributed Denial of Service (DDoS) attacks or hidden communication channels in malware. This approach enhances security by identifying abnormal patterns and informing Network Intrusion Detection Systems (NIDSs) to recognize potential attacks, enhancing protection against both known and novel threats. This research focuses on identifying cyclostationary malware behavior and its detection. The main goal is to pinpoint essential cyclostationary features used in NIDSs. These features are extracted using algorithms such as Boruta and Principal Component Analysis (PCA), and then categorized to find the most significant cyclostationary patterns. The aim of this article is to reveal periodically changing malware behaviors through cyclostationarity. The study highlights the importance of spotting cyclostationary malware in NIDSs by using established datasets like KDD99, NSL-KDD, and the UGRansome dataset. The UGRansome dataset is designed for anomaly detection research and includes both normal and abnormal network threat categories of zero-day attacks. A comparison is made using the Random Forest (RF) and Support Vector Machine (SVM) algorithms, while also evaluating the effectiveness of Boruta and PCA. The findings show that PCA is more promising than using Boruta alone for extracting cyclostationary network feature patterns. Additionally, the analysis identifies the internet protocol as the most noticeable cyclostationary feature pattern used by malware. Notably, the UGRansome dataset outperforms the KDD99 and NSL-KDD, achieving 99% accuracy in signature malware detection using the RF algorithm and 98% with the SVM. 
| false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 388,616 |
2312.03406 | SVQ: Sparse Vector Quantization for Spatiotemporal Forecasting | Spatio-temporal forecasting, pivotal in numerous fields, hinges on the delicate equilibrium between isolating nuanced patterns and sifting out noise. To tackle this, we introduce Sparse Regression-based Vector Quantization (SVQ), a novel technique that leverages sparse regression for succinct representation, an approach theoretically and practically favored over classical clustering-based vector quantization methods. This approach preserves critical details from the original vectors using a regression model while filtering out noise via sparse design. Moreover, we approximate the sparse regression process using a blend of a two-layer MLP and an extensive codebook. This approach not only substantially cuts down on computational costs but also grants SVQ differentiability and training simplicity, resulting in a notable enhancement of performance. Our empirical studies on five spatial-temporal benchmark datasets demonstrate that SVQ achieves state-of-the-art results. Specifically, on the WeatherBench-S temperature dataset, SVQ improves the top baseline by 7.9%. In video prediction benchmarks-Human, KTH, and KittiCaltech-it reduces MAE by an average of 9.4% and improves image quality by 17.3% (LPIPS). | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 413,243 |
2406.19783 | NLPerturbator: Studying the Robustness of Code LLMs to Natural Language
Variations | Large language models (LLMs) achieve promising results in code generation based on a given natural language description. They have been integrated into open-source projects and commercial products to facilitate daily coding activities. The natural language description in the prompt is crucial for LLMs to comprehend users' requirements. Prior studies uncover that LLMs are sensitive to changes in the prompts, including slight changes that look inconspicuous. However, natural language descriptions often vary in real-world scenarios (e.g., different formats, grammar, and wording). Prior studies on the robustness of LLMs are often based on random perturbations, and such perturbations may not actually happen. In this paper, we conduct a comprehensive study to investigate how robust code LLMs are to variations of natural language description in real-world scenarios. We summarize 18 categories of perturbations of natural language and 3 combinations of co-occurring categories based on our literature review and an online survey with practitioners. We propose an automated framework, NLPerturbator, which can perform perturbations of each category given a set of prompts. Through a series of experiments on code generation using six code LLMs, we find that the perturbed prompts can decrease the performance of code generation by a considerable margin (e.g., up to 21.2%, and 4.8% to 6.1% on average). Our study highlights the importance of enhancing the robustness of LLMs to real-world variations in the prompts, as well as the essentiality of attentively constructing the prompts. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | 468,564
2409.03906 | Analytical Optimized Traffic Flow Recovery for Large-scale Urban
Transportation Network | The implementation of intelligent transportation systems (ITS) has enhanced data collection in urban transportation through advanced traffic sensing devices. However, the high costs associated with installation and maintenance result in sparse traffic data coverage. To obtain complete, accurate, and high-resolution network-wide traffic flow data, this study introduces the Analytical Optimized Recovery (AOR) approach that leverages abundant GPS speed data alongside sparse flow data to estimate traffic flow in large-scale urban networks. The method formulates a constrained optimization framework that utilizes a quadratic objective function with l2 norm regularization terms to address the traffic flow recovery problem effectively and incorporates a Lagrangian relaxation technique to maintain non-negativity constraints. The effectiveness of this approach was validated in a large urban network in Shenzhen's Futian District using the Simulation of Urban MObility (SUMO) platform. Analytical results indicate that the method achieves low estimation errors, affirming its suitability for comprehensive traffic analysis in urban settings with limited sensor deployment. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 486,218 |
2203.02533 | BoostMIS: Boosting Medical Image Semi-supervised Learning with Adaptive
Pseudo Labeling and Informative Active Annotation | In this paper, we propose a novel semi-supervised learning (SSL) framework named BoostMIS that combines adaptive pseudo labeling and informative active annotation to unleash the potential of medical image SSL models: (1) BoostMIS can adaptively leverage the cluster assumption and consistency regularization of the unlabeled data according to the current learning status. This strategy can adaptively generate one-hot "hard" labels converted from task model predictions for better task model training. (2) For the unselected unlabeled images with low confidence, we introduce an Active learning (AL) algorithm to find the informative samples as the annotation candidates by exploiting virtual adversarial perturbation and model's density-aware entropy. These informative candidates are subsequently fed into the next training cycle for better SSL label propagation. Notably, the adaptive pseudo-labeling and informative active annotation form a learning closed-loop that are mutually collaborative to boost medical image SSL. To verify the effectiveness of the proposed method, we collected a metastatic epidural spinal cord compression (MESCC) dataset that aims to optimize MESCC diagnosis and classification for improved specialist referral and treatment. We conducted an extensive experimental study of BoostMIS on MESCC and another public dataset COVIDx. The experimental results verify our framework's effectiveness and generalisability for different medical image datasets with a significant improvement over various state-of-the-art methods. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 283,775 |
2406.19388 | Taming Data and Transformers for Audio Generation | Generating ambient sounds is a challenging task due to data scarcity and often insufficient caption quality, making it difficult to employ large-scale generative models for the task. In this work, we tackle this problem by introducing two new models. First, we propose AutoCap, a high-quality and efficient automatic audio captioning model. By using a compact audio representation and leveraging audio metadata, AutoCap substantially enhances caption quality, reaching a CIDEr score of 83.2, marking a 3.2% improvement from the best available captioning model at four times faster inference speed. Second, we propose GenAu, a scalable transformer-based audio generation architecture that we scale up to 1.25B parameters. Using AutoCap to generate captions for clips from existing audio datasets, we demonstrate the benefits of data scaling with synthetic captions as well as model size scaling. When compared to state-of-the-art audio generators trained at similar size and data scale, GenAu obtains significant improvements of 4.7% in FAD score, 22.7% in IS, and 13.5% in CLAP score, indicating significantly improved quality of generated audio compared to previous works. Moreover, we propose an efficient and scalable pipeline for collecting audio datasets, enabling us to compile 57M ambient audio clips, forming AutoReCap-XL, the largest available audio-text dataset, at 90 times the scale of existing ones. Our code, model checkpoints, and dataset are publicly available. | false | false | true | false | false | false | false | false | true | false | false | true | false | false | false | false | false | true | 468,399 |
cs/0404032 | When Do Differences Matter? On-Line Feature Extraction Through Cognitive
Economy | For an intelligent agent to be truly autonomous, it must be able to adapt its representation to the requirements of its task as it interacts with the world. Most current approaches to on-line feature extraction are ad hoc; in contrast, this paper presents an algorithm that bases judgments of state compatibility and state-space abstraction on principled criteria derived from the psychological principle of cognitive economy. The algorithm incorporates an active form of Q-learning, and partitions continuous state-spaces by merging and splitting Voronoi regions. The experiments illustrate a new methodology for testing and comparing representations by means of learning curves. Results from the puck-on-a-hill task demonstrate the algorithm's ability to learn effective representations, superior to those produced by some other, well-known, methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | false | false | 538,155 |
1802.01572 | MotifNet: a motif-based Graph Convolutional Network for directed graphs | Deep learning on graphs, and in particular graph convolutional neural networks, has recently attracted significant attention in the machine learning community. Many of such techniques explore the analogy between the graph Laplacian eigenvectors and the classical Fourier basis, allowing one to formulate the convolution as a multiplication in the spectral domain. One of the key drawbacks of spectral CNNs is their explicit assumption of an undirected graph, leading to a symmetric Laplacian matrix with orthogonal eigendecomposition. In this work we propose MotifNet, a graph CNN capable of dealing with directed graphs by exploiting local graph motifs. We present experimental evidence showing the advantage of our approach on real data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 89,637 |
1601.03793 | Stochastic Geometry Analysis of Multi-Antenna Two-Tier Cellular Networks | In this paper, we study the key properties of multi-antenna two-tier networks under different system configurations. Based on stochastic geometry, we derive the expressions and approximations for the users' average data rate. Through the more tractable approximations, the theoretical analysis can be greatly simplified. We find that the differences in density and transmit power between two tiers, together with the range expansion bias, significantly affect the users' data rate. Besides, for the purpose of area spectral efficiency (ASE) maximization, we find that the optimal number of active users for each tier is approximately a fixed proportion of the sum of the number of antennas plus one. Interestingly, the optimal settings are insensitive to different configurations between two tiers. Last but not least, if the number of antennas of macro base stations (MBSs) is sufficiently larger than that of small cell base stations (SBSs), we find that range expansion will improve ASE. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 50,947 |
2305.18034 | A Corpus for Sentence-level Subjectivity Detection on English News
Articles | We develop novel annotation guidelines for sentence-level subjectivity detection, which are not limited to language-specific cues. We use our guidelines to collect NewsSD-ENG, a corpus of 638 objective and 411 subjective sentences extracted from English news articles on controversial topics. Our corpus paves the way for subjectivity detection in English and across other languages without relying on language-specific tools, such as lexicons or machine translation. We evaluate state-of-the-art multilingual transformer-based models on the task in mono-, multi-, and cross-language settings. For this purpose, we re-annotate an existing Italian corpus. We observe that models trained in the multilingual setting achieve the best performance on the task. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 368,839 |
1810.05401 | A Gentle Introduction to Deep Learning in Medical Image Processing | This paper tries to give a gentle introduction to deep learning in medical image processing, proceeding from theoretical foundations to applications. We first discuss general reasons for the popularity of deep learning, including several major breakthroughs in computer science. Next, we start reviewing the fundamental basics of the perceptron and neural networks, along with some fundamental theory that is often omitted. Doing so allows us to understand the reasons for the rise of deep learning in many application domains. Obviously medical image processing is one of these areas which has been largely affected by this rapid progress, in particular in image detection and recognition, image segmentation, image registration, and computer-aided diagnosis. There are also recent trends in physical simulation, modelling, and reconstruction that have led to astonishing results. Yet, some of these approaches neglect prior knowledge and hence bear the risk of producing implausible results. These apparent weaknesses highlight current limitations of deep learning. However, we also briefly discuss promising approaches that might be able to resolve these problems in the future. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 110,220 |
2311.18103 | Corner-to-Center Long-range Context Model for Efficient Learned Image
Compression | In the framework of learned image compression, the context model plays a pivotal role in capturing the dependencies among latent representations. To reduce the decoding time resulting from the serial autoregressive context model, the parallel context model has been proposed as an alternative that necessitates only two passes during the decoding phase, thus facilitating efficient image compression in real-world scenarios. However, performance degradation occurs due to its incomplete causal context. To tackle this issue, we conduct an in-depth analysis of the performance degradation observed in existing parallel context models, focusing on two aspects: the Quantity and Quality of information utilized for context prediction and decoding. Based on such analysis, we propose the \textbf{Corner-to-Center transformer-based Context Model (C$^3$M)} designed to enhance context and latent predictions and improve rate-distortion performance. Specifically, we leverage the logarithmic-based prediction order to predict more context features from corner to center progressively. In addition, to enlarge the receptive field in the analysis and synthesis transformation, we use the Long-range Crossing Attention Module (LCAM) in the encoder/decoder to capture the long-range semantic information by assigning the different window shapes in different channels. Extensive experimental evaluations show that the proposed method is effective and outperforms the state-of-the-art parallel methods. Finally, according to the subjective analysis, we suggest that improving the detailed representation in transformer-based image compression is a promising direction to be explored. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 411,558 |
2501.14443 | Learning more with the same effort: how randomization improves the
robustness of a robotic deep reinforcement learning agent | The industrial application of Deep Reinforcement Learning (DRL) is frequently slowed down because of the inability to generate the experience required to train the models. Collecting data often involves considerable time and economic effort that is unaffordable in most cases. Fortunately, devices like robots can be trained with synthetic experience thanks to virtual environments. With this approach, the sample efficiency problems of artificial agents are mitigated, but another issue arises: the need for efficiently transferring the synthetic experience into the real world (sim-to-real). This paper analyzes the robustness of a state-of-the-art sim-to-real technique known as progressive neural networks (PNNs) and studies how adding diversity to the synthetic experience can complement it. To better understand the drivers that lead to a lack of robustness, the robotic agent is still tested in a virtual environment to ensure total control on the divergence between the simulated and real models. The results show that a PNN-like agent exhibits a substantial decrease in its robustness at the beginning of the real training phase. Randomizing certain variables during simulation-based training significantly mitigates this issue. On average, the increase in the model's accuracy is around 25% when diversity is introduced in the training process. This improvement can be translated into a decrease in the required real experience for the same final robustness performance. Notwithstanding, adding real experience to agents should still be beneficial regardless of the quality of the virtual experience fed into the agent. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 527,121 |
1810.10962 | Batch Normalization Sampling | Deep Neural Networks (DNNs) have thrived in recent years, with Batch Normalization (BN) playing an indispensable role. However, it has been observed that BN is costly due to the reduction operations. In this paper, we propose alleviating this problem through sampling only a small fraction of data for normalization at each iteration. Specifically, we model it as a statistical sampling problem and identify that by sampling less correlated data, we can largely reduce the requirement of the number of data for statistics estimation in BN, which directly simplifies the reduction operations. Based on this conclusion, we propose two sampling strategies, "Batch Sampling" (randomly select several samples from each batch) and "Feature Sampling" (randomly select a small patch from each feature map of all samples), that take both computational efficiency and sample correlation into consideration. Furthermore, we introduce an extremely simple variant of BN, termed as Virtual Dataset Normalization (VDN), that can normalize the activations well with few synthetic random samples. All the proposed methods are evaluated on various datasets and networks, where an overall training speedup by up to 20% on GPU is practically achieved without the support of any specialized libraries, and the losses in accuracy and convergence rate are negligible. Finally, we extend our work to the "micro-batch normalization" problem and yield comparable performance with existing approaches in the case of tiny batch sizes. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 111,410 |
2411.00006 | Personality-Guided Code Generation Using Large Language Models | Code generation, the automatic creation of source code from natural language descriptions, has garnered significant attention due to its potential to streamline software development. Inspired by research that links task-personality alignment with improved development outcomes, we conduct an empirical study on personality-guided code generation using large language models (LLMs). Specifically, we investigate how emulating personality traits appropriate to the coding tasks affects LLM performance. We extensively evaluate this approach using seven widely adopted LLMs across four representative datasets. Our results show that personality guidance significantly enhances code generation accuracy, with improved pass rates in 23 out of 28 LLM-dataset combinations. Notably, in 11 cases, the improvement exceeds 5%, and in 5 instances, it surpasses 10%, with the highest gain reaching 12.9%. Additionally, personality guidance can be easily integrated with other prompting strategies to further boost performance. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 504,395 |
2402.17532 | Retrieval is Accurate Generation | Standard language models generate text by selecting tokens from a fixed, finite, and standalone vocabulary. We introduce a novel method that selects context-aware phrases from a collection of supporting documents. One of the most significant challenges for this paradigm shift is determining the training oracles, because a string of text can be segmented in various ways and each segment can be retrieved from numerous possible documents. To address this, we propose to initialize the training oracles using linguistic heuristics and, more importantly, bootstrap the oracles through iterative self-reinforcement. Extensive experiments show that our model not only outperforms standard language models on a variety of knowledge-intensive tasks but also demonstrates improved generation quality in open-ended text generation. For instance, compared to the standard language model counterpart, our model raises the accuracy from 23.47% to 36.27% on OpenbookQA, and improves the MAUVE score from 42.61% to 81.58% in open-ended text generation. Remarkably, our model also achieves the best performance and the lowest latency among several retrieval-augmented baselines. In conclusion, we assert that retrieval is more accurate generation and hope that our work will encourage further research on this new paradigm shift. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 433,038 |
2007.14555 | Self-Secure Capacity-Achieving Feedback Schemes of Gaussian
Multiple-Access Wiretap Channels with Degraded Message Sets | The Schalkwijk-Kailath (SK) scheme, which achieves the capacity of the point-to-point white Gaussian channel with feedback, is secure by itself and also achieves the secrecy capacity of the Gaussian wiretap channel with feedback, i.e., the SK scheme is a self-secure capacity-achieving (SSCA) feedback scheme for the Gaussian wiretap channel. For the multi-user wiretap channels, recently, it has been shown that Ozarow's capacity-achieving feedback scheme for the two-user Gaussian multiple-access channel (GMAC) is the SSCA feedback scheme for the two-user Gaussian multiple-access wiretap channel (GMAC-WT). In this paper, first, we propose a capacity-achieving feedback scheme for the two-user GMAC with degraded message sets (GMAC-DMS), and show that this scheme is the SSCA feedback scheme for the two-user GMAC-WT with degraded message sets (GMAC-WT-DMS). Next, we extend the above scheme to the two-user GMAC-DMS with noncausal channel state information at the transmitters (NCSIT), and show that the extended scheme is capacity-achieving and also a SSCA feedback scheme for the two-user GMAC-WT-DMS with NCSIT. Finally, we derive outer bounds on the secrecy capacity regions of the two-user GMAC-WT-DMS with or without NCSIT, and numerical results show the rate gains by the feedback. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 189,434 |
2106.11378 | Dual-port grid-forming control of MMCs and its applications to grids of
grids | This work focuses on grid-forming (GFM) control of Interconnecting Power Converters (IPCs) that are used to interconnect multiple HVAC and HVDC subgrids to form a grid of grids. We introduce the concept of dual-port GFM control that leverages the ability of Modular Multilevel Converters (MMCs) to simultaneously form its AC and DC terminal voltage and present two dual-port GFM MMC controls. We provide analytical results and high-fidelity simulations that demonstrate that (i) dual-port GFM control is more resilient to contingencies (i.e., line and generator outages) than state-of-the-art single-port GFM control, and (ii) unlike single-port GFM control, dual-port GFM control does not require assigning grid-forming and grid-following (GFL) roles to the IPC terminals in grids of grids. Finally, we provide an in-depth discussion and comparison of single-port GFM control and the proposed dual-port GFM controls. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 242,371 |
1603.02351 | Biologically inspired model simulating visual pathways and cerebellum
function in human - Achieving visuomotor coordination and high precision
movement with learning ability | In recent years, the interdisciplinary research between information science and neuroscience has been a hotspot. In this paper, based on recent biological findings, we proposed a new model to mimic visual information processing, motor planning and control in central and peripheral nervous systems of human. Main steps of the model are as follows: 1) Simulating "where" pathway in human: the Selective Search method is applied to simulate the function of human dorsal visual pathway to localize object candidates; 2) Simulating "what" pathway in human: a Convolutional Deep Belief Network is applied to simulate the hierarchical structure and function of human ventral visual pathway for object recognition; 3) Simulating motor planning process in human: habitual motion planning process in human is simulated, and motor commands are generated from the combination of control signals from past experiences; 4) Simulating precise movement control in human: calibrated control signals, which mimic the adjustment for movement from cerebellum in human, are generated and updated from calibration of movement errors in past experiences, and sent to the movement model to achieve high precision. The proposed framework mimics structures and functions of human recognition, visuomotor coordination and precise motor control. Experiments on object localization, recognition and movement control demonstrate that the new proposed model can not only accomplish visuomotor coordination tasks, but also achieve high precision movement with learning ability. Meanwhile, the results also prove the validity of the introduced mechanisms. Furthermore, the proposed model could be generalized and applied to other systems, such as mechanical and electrical systems in robotics, to achieve fast response, high precision movement with learning ability. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 53,002 |
2210.04313 | On the Need of Analog Signals and Systems for Digital-Twin
Representations | We consider the task of converting different digital descriptions of analog bandlimited signals and systems into each other, with a rigorous application of mathematical computability theory. Albeit very fundamental, the problem appears in the scope of digital twinning, an emerging concept in the field of digital processing of analog information that is regularly mentioned as one of the key enablers for next-generation cyber-physical systems and their areas of application. In this context, we prove that essential quantities such as the peak-to-average power ratio and the bounded-input/bounded-output norm, which determine the behavior of the real-world analog system, cannot generally be determined from the system's digital twin, depending on which of the above-mentioned descriptions is chosen. As a main result, we characterize the algorithmic strength of Shannon's sampling type representation as digital twin implementation and also introduce a new digital twin implementation of analog signals and systems. We show there exist two digital descriptions, both of which uniquely characterize a certain analog system, such that one description can be algorithmically converted into the other, but not vice versa. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 322,408 |
1809.05820 | Cross-Domain Labeled LDA for Cross-Domain Text Classification | Cross-domain text classification aims at building a classifier for a target domain which leverages data from both source and target domain. One promising idea is to minimize the feature distribution differences of the two domains. Most existing studies explicitly minimize such differences by an exact alignment mechanism (aligning features by one-to-one feature alignment, projection matrix etc.). Such exact alignment, however, will restrict models' learning ability and will further impair models' performance on classification tasks when the semantic distributions of different domains are very different. To address this problem, we propose a novel group alignment which aligns the semantics at group level. In addition, to help the model learn better semantic groups and semantics within these groups, we also propose a partial supervision for model's learning in source domain. To this end, we embed the group alignment and a partial supervision into a cross-domain topic model, and propose a Cross-Domain Labeled LDA (CDL-LDA). On the standard 20Newsgroup and Reuters dataset, extensive quantitative (classification, perplexity etc.) and qualitative (topic detection) experiments are conducted to show the effectiveness of the proposed group alignment and partial supervision. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 107,881 |
1701.01909 | Tracking The Untrackable: Learning To Track Multiple Cues with Long-Term
Dependencies | The majority of existing solutions to the Multi-Target Tracking (MTT) problem do not combine cues in a coherent end-to-end fashion over a long period of time. However, we present an online method that encodes long-term temporal dependencies across multiple cues. One key challenge of tracking methods is to accurately track occluded targets or those which share similar appearance properties with surrounding objects. To address this challenge, we present a structure of Recurrent Neural Networks (RNN) that jointly reasons on multiple cues over a temporal window. We are able to correct many data association errors and recover observations from an occluded state. We demonstrate the robustness of our data-driven approach by tracking multiple targets using their appearance, motion, and even interactions. Our method outperforms previous works on multiple publicly available datasets including the challenging MOT benchmark. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 66,471 |
cs/0512015 | Joint fixed-rate universal lossy coding and identification of
continuous-alphabet memoryless sources | The problem of joint universal source coding and identification is considered in the setting of fixed-rate lossy coding of continuous-alphabet memoryless sources. For a wide class of bounded distortion measures, it is shown that any compactly parametrized family of $\mathbb{R}^d$-valued i.i.d. sources with absolutely continuous distributions satisfying appropriate smoothness and Vapnik--Chervonenkis learnability conditions, admits a joint scheme for universal lossy block coding and parameter estimation, such that when the block length $n$ tends to infinity, the overhead per-letter rate and the distortion redundancies converge to zero as $O(n^{-1}\log n)$ and $O(\sqrt{n^{-1}\log n})$, respectively. Moreover, the active source can be determined at the decoder up to a ball of radius $O(\sqrt{n^{-1} \log n})$ in variational distance, asymptotically almost surely. The system has finite memory length equal to the block length, and can be thought of as blockwise application of a time-invariant nonlinear filter with initial conditions determined from the previous block. Comparisons are presented with several existing schemes for universal vector quantization, which do not include parameter estimation explicitly, and an extension to unbounded distortion measures is outlined. Finally, finite mixture classes and exponential families are given as explicit examples of parametric sources admitting joint universal compression and modeling schemes of the kind studied here. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 539,124 |
2305.03247 | Heavy-ball-based optimal thresholding algorithms for sparse linear
inverse problems | Linear inverse problems arise in diverse engineering fields especially in signal and image reconstruction. The development of computational methods for linear inverse problems with sparsity is one of the recent trends in this field. The so-called optimal $k$-thresholding is a newly introduced method for sparse optimization and linear inverse problems. Compared to other sparsity-aware algorithms, the advantage of optimal $k$-thresholding method lies in that it performs thresholding and error metric reduction simultaneously and thus works stably and robustly for solving medium-sized linear inverse problems. However, the runtime of this method is generally high when the size of the problem is large. The purpose of this paper is to propose an acceleration strategy for this method. Specifically, we propose a heavy-ball-based optimal $k$-thresholding (HBOT) algorithm and its relaxed variants for sparse linear inverse problems. The convergence of these algorithms is shown under the restricted isometry property. In addition, the numerical performance of the heavy-ball-based relaxed optimal $k$-thresholding pursuit (HBROTP) has been evaluated, and simulations indicate that HBROTP admits robustness for signal and image reconstruction even in noisy environments. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 362,318 |
1008.3614 | Control and Optimization Meet the Smart Power Grid - Scheduling of Power
Demands for Optimal Energy Management | The smart power grid aims at harnessing information and communication technologies to enhance reliability and enforce sensible use of energy. Its realization is driven by the fundamental goal of effective management of demand load. In this work, we envision a scenario with real-time communication between the operator and consumers. The grid operator controller receives requests for power demands from consumers, each with its own power requirement, duration, and deadline by which it is to be completed. The objective is to devise a power demand task scheduling policy that minimizes the grid operational cost over a time horizon. The operational cost is a convex function of instantaneous power consumption and reflects the fact that each additional unit of power needed to serve demands is more expensive as demand load increases. First, we study the off-line demand scheduling problem, where parameters are fixed and known. Next, we devise a stochastic model for the case when demands are generated continually and scheduling decisions are taken online and focus on long-term average cost. We present two instances of power consumption control based on observing current consumption. First, the controller may choose to serve a new demand request upon arrival or to postpone it to the end of its deadline. Second, the additional option exists to activate one of the postponed demands when an active demand terminates. For both instances, the optimal policies are threshold based. We derive a lower performance bound over all policies, which is asymptotically tight as deadlines increase. We propose the Controlled Release threshold policy and prove it is asymptotically optimal. The policy activates a new demand request if the current power consumption is less than a threshold, otherwise it is queued. Queued demands are scheduled when their deadline expires or when the consumption drops below the threshold. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 7,324 |
2203.15095 | Robust Speaker Recognition with Transformers Using wav2vec 2.0 | Recent advances in unsupervised speech representation learning have discovered new approaches and provided new state-of-the-art results for diverse types of speech processing tasks. This paper presents an investigation of using wav2vec 2.0 deep speech representations for the speaker recognition task. The proposed fine-tuning procedure of wav2vec 2.0 with simple TDNN and statistic pooling back-end using additive angular margin loss allows us to obtain a deep speaker embedding extractor that is well-generalized across different domains. It is concluded that the Contrastive Predictive Coding pretraining scheme efficiently utilizes the power of unlabeled data, and thus opens the door to powerful transformer-based speaker recognition systems. The experimental results obtained in this study demonstrate that fine-tuning can be done on relatively small sets and a clean version of data. Using data augmentation during fine-tuning provides additional performance gains in speaker verification. In this study, speaker recognition systems were analyzed on a wide range of well-known verification protocols: VoxCeleb1 cleaned test set, NIST SRE 18 development set, NIST SRE 2016 and NIST SRE 2019 evaluation set, VOiCES evaluation set, NIST 2021 SRE, and CTS challenges sets. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 288,230 |
2111.14354 | Responding to Challenge Call of Machine Learning Model Development in
Diagnosing Respiratory Disease Sounds | In this study, a machine learning model was developed for automatically detecting respiratory system sounds such as sneezing and coughing in disease diagnosis. Automatic modeling of breath sounds, which carry valuable information, enables early diagnosis and treatment. A successful machine learning model was developed in this study, which was a strong response to the challenge called the "Pfizer digital medicine challenge" on the "OSFHOME" open access platform. Sound files from the "Environmental sound classification" dataset ESC-50 and AudioSet were used to prepare the dataset. In this dataset, which consisted of three parts, features effective for coughing and sneezing sound analysis were extracted from training, testing, and validation samples. Based on the Mel frequency cepstral coefficients (MFCC) feature extraction method, mathematical and statistical features were prepared. Three different classification techniques were considered to perform successful respiratory sound classification in the dataset containing more than 3800 different sounds. Support vector machine (SVM) with radial basis function (RBF) kernels, ensemble aggregation and decision tree classification methods were used as classification techniques. In an attempt to classify coughing and sneezing sounds from other sounds, SVM with RBF kernels achieved 83% success. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 268,580 |
2402.17087 | A Note on Bayesian Networks with Latent Root Variables | We characterise the likelihood function computed from a Bayesian network with latent variables as root nodes. We show that the marginal distribution over the remaining, manifest, variables also factorises as a Bayesian network, which we call empirical. A dataset of observations of the manifest variables allows us to quantify the parameters of the empirical Bayesian net. We prove that (i) the likelihood of such a dataset from the original Bayesian network is dominated by the global maximum of the likelihood from the empirical one; and that (ii) such a maximum is attained if and only if the parameters of the Bayesian network are consistent with those of the empirical model. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 432,832 |
2303.11969 | Explain To Me: Salience-Based Explainability for Synthetic Face Detection Models | The performance of convolutional neural networks has continued to improve over the last decade. At the same time, as model complexity grows, it becomes increasingly more difficult to explain model decisions. Such explanations may be of critical importance for reliable operation of human-machine pairing setups, or for model selection when the "best" model among many equally-accurate models must be established. Saliency maps represent one popular way of explaining model decisions by highlighting image regions models deem important when making a prediction. However, examining salience maps at scale is not practical. In this paper, we propose five novel methods of leveraging model salience to explain a model behavior at scale. These methods ask: (a) what is the average entropy for a model's salience maps, (b) how does model salience change when fed out-of-set samples, (c) how closely does model salience follow geometrical transformations, (d) what is the stability of model salience across independent training runs, and (e) how does model salience react to salience-guided image degradations. To assess the proposed measures on a concrete and topical problem, we conducted a series of experiments for the task of synthetic face detection with two types of models: those trained traditionally with cross-entropy loss, and those guided by human salience when training to increase model generalizability. These two types of models are characterized by different, interpretable properties of their salience maps, which allows for the evaluation of the correctness of the proposed measures. We offer source codes for each measure along with this paper. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 353,088
2402.06994 | A Change Detection Reality Check | In recent years, there has been an explosion of proposed change detection deep learning architectures in the remote sensing literature. These approaches claim to offer state-of-the-art performance on different standard benchmark datasets. However, has the field truly made significant progress? In this paper we perform experiments which conclude a simple U-Net segmentation baseline without training tricks or complicated architectural changes is still a top performer for the task of change detection. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 428,525 |
2105.13504 | Lattice partition recovery with dyadic CART | We study piece-wise constant signals corrupted by additive Gaussian noise over a $d$-dimensional lattice. Data of this form naturally arise in a host of applications, and the tasks of signal detection or testing, de-noising and estimation have been studied extensively in the statistical and signal processing literature. In this paper we consider instead the problem of partition recovery, i.e.~of estimating the partition of the lattice induced by the constancy regions of the unknown signal, using the computationally-efficient dyadic classification and regression tree (DCART) methodology proposed by \citep{donoho1997cart}. We prove that, under appropriate regularity conditions on the shape of the partition elements, a DCART-based procedure consistently estimates the underlying partition at a rate of order $\sigma^2 k^* \log (N)/\kappa^2$, where $k^*$ is the minimal number of rectangular sub-graphs obtained using recursive dyadic partitions supporting the signal partition, $\sigma^2$ is the noise variance, $\kappa$ is the minimal magnitude of the signal difference among contiguous elements of the partition and $N$ is the size of the lattice. Furthermore, under stronger assumptions, our method attains a sharper estimation error of order $\sigma^2\log(N)/\kappa^2$, independent of $k^*$, which we show to be minimax rate optimal. Our theoretical guarantees further extend to the partition estimator based on the optimal regression tree estimator (ORT) of \cite{chatterjee2019adaptive} and to the one obtained through an NP-hard exhaustive search method. We corroborate our theoretical findings and the effectiveness of DCART for partition recovery in simulations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 237,322 |
2411.18811 | NewsEdits 2.0: Learning the Intentions Behind Updating News | As events progress, news articles often update with new information: if we are not cautious, we risk propagating outdated facts. In this work, we hypothesize that linguistic features indicate factual fluidity, and that we can predict which facts in a news article will update using solely the text of a news article (i.e. not external resources like search engines). We test this hypothesis, first, by isolating fact-updates in large news revisions corpora. News articles may update for many reasons (e.g. factual, stylistic, narrative). We introduce the NewsEdits 2.0 taxonomy, an edit-intentions schema that separates fact updates from stylistic and narrative updates in news writing. We annotate over 9,200 pairs of sentence revisions and train high-scoring ensemble models to apply this schema. Then, taking a large dataset of silver-labeled pairs, we show that we can predict when facts will update in older article drafts with high precision. Finally, to demonstrate the usefulness of these findings, we construct a language model question-asking (LLM-QA) abstention task: we wish the LLM to abstain from answering questions when information is likely to become outdated. Using our predictions, we show that LLM abstention reaches near-oracle levels of accuracy. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | true | 512,011
1611.01761 | High-Fidelity Model Order Reduction for Microgrids Stability Assessment | Proper modeling of inverter-based microgrids is crucial for accurate assessment of stability boundaries. It has been recently realized that the stability conditions for such microgrids are significantly different from those known for large-scale power systems. While detailed models are available, they are both computationally expensive and cannot provide insight into the instability mechanisms and factors. In this paper, a computationally efficient and accurate reduced-order model is proposed for modeling the inverter-based microgrids. The main factors affecting microgrid stability are analyzed using the developed reduced-order model and are shown to be unique for the microgrid-based network, which has no direct analogy to large-scale power systems. Particularly, it has been discovered that the stability limits for the conventional droop-based system (omega - P/V - Q) are determined by the ratio of inverter rating to network capacity, leading to a smaller stability region for microgrids with shorter lines. The theoretical derivation has been provided to verify the above investigation based on both the simplified and generalized network configurations. More importantly, the proposed reduced-order model not only maintains the modeling accuracy but also enhances the computation efficiency. Finally, the results are verified with the detailed model via both frequency and time domain analyses. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 63,436
2303.03369 | Multimodal Prompting with Missing Modalities for Visual Recognition | In this paper, we tackle two challenges in multimodal learning for visual recognition: 1) when missing-modality occurs either during training or testing in real-world situations; and 2) when the computation resources are not available to finetune on heavy transformer models. To this end, we propose to utilize prompt learning and mitigate the above two challenges together. Specifically, our modality-missing-aware prompts can be plugged into multimodal transformers to handle general missing-modality cases, while only requiring less than 1% learnable parameters compared to training the entire model. We further explore the effect of different prompt configurations and analyze the robustness to missing modality. Extensive experiments are conducted to show the effectiveness of our prompt learning framework that improves the performance under various missing-modality cases, while alleviating the requirement of heavy model re-training. Code is available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 349,700 |
2110.09869 | User-Centric Federated Learning | Data heterogeneity across participating devices poses one of the main challenges in federated learning as it has been shown to greatly hamper its convergence time and generalization capabilities. In this work, we address this limitation by enabling personalization using multiple user-centric aggregation rules at the parameter server. Our approach potentially produces a personalized model for each user at the cost of some extra downlink communication overhead. To strike a trade-off between personalization and communication efficiency, we propose a broadcast protocol that limits the number of personalized streams while retaining the essential advantages of our learning scheme. Through simulation results, our approach is shown to enjoy higher personalization capabilities, faster convergence, and better communication efficiency compared to other competing baseline solutions. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 261,956 |
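The user-centric aggregation idea in the abstract above (a personalized model per user, formed at the parameter server) can be sketched as a weighted average of user updates. The mixing matrix below is a hypothetical similarity-based weighting invented for illustration, not the paper's actual aggregation rule:

```python
import numpy as np

# One flattened model update per user (hypothetical values).
rng = np.random.default_rng(0)
n_users, dim = 4, 5
updates = rng.standard_normal((n_users, dim))

# Hypothetical user-centric mixing matrix: each row defines one user's
# personalized aggregation weights (heavier weight on the user itself,
# lighter weight on statistically similar peers). Rows sum to 1.
W = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.1, 0.7, 0.1, 0.1],
              [0.1, 0.1, 0.7, 0.1],
              [0.1, 0.1, 0.1, 0.7]])

# Server-side step: one personalized model per user.
personalized = W @ updates
```

The extra downlink cost mentioned in the abstract shows up here directly: instead of broadcasting a single averaged model (one row of weights shared by all), the server transmits a distinct row-weighted combination to each user.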
1810.08293 | CURE-OR: Challenging Unreal and Real Environments for Object Recognition | In this paper, we introduce a large-scale, controlled, and multi-platform object recognition dataset denoted as Challenging Unreal and Real Environments for Object Recognition (CURE-OR). In this dataset, there are 1,000,000 images of 100 objects with varying size, color, and texture that are positioned in five different orientations and captured using five devices including a webcam, a DSLR, and three smartphone cameras in real-world (real) and studio (unreal) environments. The controlled challenging conditions include underexposure, overexposure, blur, contrast, dirty lens, image noise, resizing, and loss of color information. We utilize the CURE-OR dataset to test recognition APIs (Amazon Rekognition and Microsoft Azure Computer Vision) and show that their performance significantly degrades under challenging conditions. Moreover, we investigate the relationship between object recognition and image quality and show that objective quality algorithms can estimate recognition performance under certain photometric challenging conditions. The dataset is publicly available at https://ghassanalregib.com/cure-or/. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 110,788
2002.06507 | Coordinate-free Circumnavigation of a Moving Target via a PD-like Controller | This paper proposes a coordinate-free controller for a nonholonomic vehicle to circumnavigate a fully-actuated moving target by using range-only measurements. If the range rate is available, our Proportional Derivative (PD)-like controller has a simple structure as the standard PD controller, except the design of an additive constant bias and a saturation function in the error feedback. We show that if the target is stationary, the vehicle asymptotically encloses the target with a predefined radius at an exponential convergence rate, i.e., an exact circumnavigation pattern can be completed. For a moving target, the circumnavigation error converges to a small region whose size is shown proportional to the maneuverability of the target, e.g., the maximum linear speed and acceleration. Moreover, we design a second-order sliding mode (SOSM) filter to estimate the range rate and show that the SOSM filter can recover the range rate in a finite time. Finally, the effectiveness and advantages of our controller are validated via both numerical simulations and real experiments. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 164,214
2402.04084 | Provably learning a multi-head attention layer | The multi-head attention layer is one of the key components of the transformer architecture that sets it apart from traditional feed-forward models. Given a sequence length $k$, attention matrices $\mathbf{\Theta}_1,\ldots,\mathbf{\Theta}_m\in\mathbb{R}^{d\times d}$, and projection matrices $\mathbf{W}_1,\ldots,\mathbf{W}_m\in\mathbb{R}^{d\times d}$, the corresponding multi-head attention layer $F: \mathbb{R}^{k\times d}\to \mathbb{R}^{k\times d}$ transforms length-$k$ sequences of $d$-dimensional tokens $\mathbf{X}\in\mathbb{R}^{k\times d}$ via $F(\mathbf{X}) \triangleq \sum^m_{i=1} \mathrm{softmax}(\mathbf{X}\mathbf{\Theta}_i\mathbf{X}^\top)\mathbf{X}\mathbf{W}_i$. In this work, we initiate the study of provably learning a multi-head attention layer from random examples and give the first nontrivial upper and lower bounds for this problem: - Provided $\{\mathbf{W}_i, \mathbf{\Theta}_i\}$ satisfy certain non-degeneracy conditions, we give a $(dk)^{O(m^3)}$-time algorithm that learns $F$ to small error given random labeled examples drawn uniformly from $\{\pm 1\}^{k\times d}$. - We prove computational lower bounds showing that in the worst case, exponential dependence on $m$ is unavoidable. We focus on Boolean $\mathbf{X}$ to mimic the discrete nature of tokens in large language models, though our techniques naturally extend to standard continuous settings, e.g. Gaussian. Our algorithm, which is centered around using examples to sculpt a convex body containing the unknown parameters, is a significant departure from existing provable algorithms for learning feedforward networks, which predominantly exploit algebraic and rotation invariance properties of the Gaussian distribution. In contrast, our analysis is more flexible as it primarily relies on various upper and lower tail bounds for the input distribution and "slices" thereof. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 427,326
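The layer defined in the abstract above, $F(\mathbf{X}) = \sum_i \mathrm{softmax}(\mathbf{X}\mathbf{\Theta}_i\mathbf{X}^\top)\mathbf{X}\mathbf{W}_i$ with a row-wise softmax, is straightforward to evaluate directly. This NumPy sketch uses small random parameters and $\pm 1$ tokens (matching the Boolean input setting of the learning result) purely for illustration:

```python
import numpy as np

def softmax(Z, axis=-1):
    """Row-wise softmax with max-subtraction for numerical stability."""
    Z = Z - Z.max(axis=axis, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=axis, keepdims=True)

def multi_head_attention(X, thetas, Ws):
    """F(X) = sum_i softmax(X Theta_i X^T) X W_i."""
    out = np.zeros_like(X)
    for Theta, W in zip(thetas, Ws):
        scores = X @ Theta @ X.T            # (k, k) attention scores
        out += softmax(scores) @ X @ W      # attend, then project
    return out

# Small random instance: k tokens of dimension d, m heads.
rng = np.random.default_rng(0)
k, d, m = 4, 3, 2
X = rng.choice([-1.0, 1.0], size=(k, d))    # Boolean tokens, as in the paper
thetas = [rng.standard_normal((d, d)) for _ in range(m)]
Ws = [rng.standard_normal((d, d)) for _ in range(m)]
Y = multi_head_attention(X, thetas, Ws)
```

This is the forward map only; the paper's contribution is learning the unknown $\{\mathbf{\Theta}_i, \mathbf{W}_i\}$ from random $(\mathbf{X}, F(\mathbf{X}))$ pairs, which the sketch does not attempt.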
1610.04273 | Generalized and Extended Product Codes | Generalized Product Codes (GPCs), a unification of Product Codes and Integrated Interleaved (II) Codes, are presented. Applications for approaches requiring local and global parities are described. The more general problem of extending product codes by adding global parities is studied and an upper bound on the minimum distance of such codes is obtained. Codes with one, two and three global parities whose minimum distances meet the bound are presented. Tradeoffs between optimality and field size are discussed. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 62,366
cmp-lg/9606026 | An Efficient Compiler for Weighted Rewrite Rules | Context-dependent rewrite rules are used in many areas of natural language and speech processing. Work in computational phonology has demonstrated that, given certain conditions, such rewrite rules can be represented as finite-state transducers (FSTs). We describe a new algorithm for compiling rewrite rules into FSTs. We show the algorithm to be simpler and more efficient than existing algorithms. Further, many of our applications demand the ability to compile weighted rules into weighted FSTs, transducers generalized by providing transitions with weights. We have extended the algorithm to allow for this. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 536,592 |
1808.09166 | Removing out-of-focus blur from a single image | Reproducing an all-in-focus image from an image with defocus regions is of practical value in many applications, e.g., digital photography and robotics. Using the output of some existing defocus map estimator, existing approaches first segment a de-focused image into multiple regions, each blurred by a Gaussian kernel with a different variance, and then de-blur each region using the corresponding Gaussian kernel. In this paper, we propose a blind deconvolution method specifically designed for removing defocus blurring from an image, by providing effective solutions to two critical problems: 1) suppressing the artifacts caused by segmentation error by introducing an additional variable regularized by a weighted $\ell_0$-norm; and 2) more accurate defocus kernel estimation using non-parametric symmetry and low-rank based constraints on the kernel. The experiments on real datasets showed the advantages of the proposed method over existing ones, thanks to the effective treatment of the two critical issues mentioned above during deconvolution. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 106,130