id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2405.11935 | A Flat Dual-Polarized Millimeter-Wave Luneburg Lens Antenna Using Transformation Optics with Reduced Anisotropy and Impedance Mismatch | In this paper, a compact wideband dual-polarized Luneburg lens antenna (LLA) with reduced anisotropy and improved impedance matching is proposed in the Ka band with a wide 2D beam-scanning capability. Based on transformation optics, the spherical Luneburg lens is compressed into a cylindrical one, while the merits of high gain, broad band, wide scanning, and free polarization are preserved. A trigonometric function is applied to the material properties of the flattened Luneburg lens to reduce anisotropy, effectively alleviating the strong reflection, high sidelobes, and back radiation at no cost in antenna weight or volume. Furthermore, a lightweight, thin, wideband 7-by-1 metasurface phased array is studied as the primary feed for the LLA. The proposed metantenna, short for metamaterial-based antenna, has high potential for B5G, future wireless communication, and radar sensing as an onboard system. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 455,350 |
1602.01061 | Waveform Optimization for SWIPT with Nonlinear Energy Harvester Modeling | Simultaneous Wireless Information and Power Transfer (SWIPT) has attracted significant attention in the communication community. The problem of waveform design for SWIPT has however never been addressed so far. In this paper, a novel SWIPT transceiver architecture is introduced relying on the superposition of multisine and OFDM waveforms at the transmitter and a power-splitter receiver equipped with an energy harvester and an information decoder capable of cancelling the multisine waveforms. The SWIPT multisine/OFDM waveforms are optimized so as to maximize the rate-energy region of the whole system. They are adaptive to the channel state information and result from a posynomial maximization problem that originates from the non-linearity of the energy harvester. Numerical results illustrate the performance of the derived waveforms and SWIPT architecture. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 51,656 |
2101.04097 | Correlated Weights in Infinite Limits of Deep Convolutional Neural Networks | Infinite width limits of deep neural networks often have tractable forms. They have been used to analyse the behaviour of finite networks, as well as being useful methods in their own right. When investigating infinitely wide convolutional neural networks (CNNs), it was observed that the correlations arising from spatial weight sharing disappear in the infinite limit. This is undesirable, as spatial correlation is the main motivation behind CNNs. We show that the loss of this property is not a consequence of the infinite limit, but rather of choosing an independent weight prior. Correlating the weights maintains the correlations in the activations. Varying the amount of correlation interpolates between independent-weight limits and mean-pooling. Empirical evaluation of the infinitely wide network shows that optimal performance is achieved between the extremes, indicating that correlations can be useful. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 215,066 |
2209.11302 | ProgPrompt: Generating Situated Robot Task Plans using Large Language Models | Task planning can require defining myriad domain knowledge about the world in which a robot needs to act. To ameliorate that effort, large language models (LLMs) can be used to score potential next actions during task planning, and even generate action sequences directly, given an instruction in natural language with no additional domain information. However, such methods either require enumerating all possible next steps for scoring, or generate free-form text that may contain actions not possible on a given robot in its current context. We present a programmatic LLM prompt structure that enables plan generation functional across situated environments, robot capabilities, and tasks. Our key insight is to prompt the LLM with program-like specifications of the available actions and objects in an environment, as well as with example programs that can be executed. We make concrete recommendations about prompt structure and generation constraints through ablation experiments, demonstrate state of the art success rates in VirtualHome household tasks, and deploy our method on a physical robot arm for tabletop tasks. Website at progprompt.github.io | false | false | false | false | true | false | true | true | true | false | false | false | false | false | false | false | false | false | 319,139 |
1204.4107 | Towards the Evolution of Vertical-Axis Wind Turbines using Supershapes | We have recently presented an initial study of evolutionary algorithms used to design vertical-axis wind turbines (VAWTs) wherein candidate prototypes are evaluated under approximated wind tunnel conditions after being physically instantiated by a 3D printer. That is, unlike other approaches such as computational fluid dynamics simulations, no mathematical formulations are used and no model assumptions are made. However, the representation used significantly restricted the range of morphologies explored. In this paper, we present initial explorations into the use of a simple generative encoding, known as Gielis superformula, that produces a highly flexible 3D shape representation to design VAWT. First, the target-based evolution of 3D artefacts is investigated and subsequently initial design experiments are performed wherein each VAWT candidate is physically instantiated and evaluated under approximated wind tunnel conditions. It is shown possible to produce very closely matching designs of a number of 3D objects through the evolution of supershapes produced by Gielis superformula. Moreover, it is shown possible to use artificial physical evolution to identify novel and increasingly efficient supershape VAWT designs. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | 15,565 |
1911.07983 | Task-Based Hybrid Shared Control for Training Through Forceful Interaction | Despite the fact that robotic platforms can provide both consistent practice and objective assessments of users over the course of their training, there are relatively few instances where physical human robot interaction has been significantly more effective than unassisted practice or human-mediated training. This paper describes a hybrid shared control robot, which enhances task learning through kinesthetic feedback. The assistance assesses user actions using a task-specific evaluation criterion and selectively accepts or rejects them at each time instant. Through two human subject studies (total n=68), we show that this hybrid approach of switching between full transparency and full rejection of user inputs leads to increased skill acquisition and short-term retention compared to unassisted practice. Moreover, we show that the shared control paradigm exhibits features previously shown to promote successful training. It avoids user passivity by only rejecting user actions and allowing failure at the task. It improves performance during assistance, providing meaningful task-specific feedback. It is sensitive to initial skill of the user and behaves as an `assist-as-needed' control scheme---adapting its engagement in real time based on the performance and needs of the user. Unlike other successful algorithms, it does not require explicit modulation of the level of impedance or error amplification during training and it is permissive to a range of strategies because of its evaluation criterion. We demonstrate that the proposed hybrid shared control paradigm with a task-based minimal intervention criterion significantly enhances task-specific training. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 154,042 |
1905.11067 | Locally Differentially Private Minimum Finding | We investigate a problem of finding the minimum, in which each user has a real value and we want to estimate the minimum of these values under the local differential privacy constraint. We reveal that this problem is fundamentally difficult, and we cannot construct a mechanism that is consistent in the worst case. Instead of considering the worst case, we aim to construct a private mechanism whose error rate is adaptive to the easiness of estimation of the minimum. As a measure of easiness, we introduce a parameter $\alpha$ that characterizes the fatness of the minimum-side tail of the user data distribution. As a result, we reveal that the mechanism can achieve $O((\ln^6N/\epsilon^2N)^{1/2\alpha})$ error without knowledge of $\alpha$ and the error rate is near-optimal in the sense that any mechanism incurs $\Omega((1/\epsilon^2N)^{1/2\alpha})$ error. Furthermore, we demonstrate that our mechanism outperforms a naive mechanism by empirical evaluations on synthetic datasets. Also, we conducted experiments on the MovieLens dataset and a purchase history dataset and demonstrate that our algorithm achieves $\tilde{O}((1/N)^{1/2\alpha})$ error adaptively to $\alpha$. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 132,321 |
2205.12183 | StylizedNeRF: Consistent 3D Scene Stylization as Stylized NeRF via 2D-3D Mutual Learning | 3D scene stylization aims at generating stylized images of the scene from arbitrary novel views following a given set of style examples, while ensuring consistency when rendered from different views. Directly applying methods for image or video stylization to 3D scenes cannot achieve such consistency. Thanks to recently proposed neural radiance fields (NeRF), we are able to represent a 3D scene in a consistent way. Consistent 3D scene stylization can be effectively achieved by stylizing the corresponding NeRF. However, there is a significant domain gap between style examples which are 2D images and NeRF which is an implicit volumetric representation. To address this problem, we propose a novel mutual learning framework for 3D scene stylization that combines a 2D image stylization network and NeRF to fuse the stylization ability of 2D stylization network with the 3D consistency of NeRF. We first pre-train a standard NeRF of the 3D scene to be stylized and replace its color prediction module with a style network to obtain a stylized NeRF. It is followed by distilling the prior knowledge of spatial consistency from NeRF to the 2D stylization network through an introduced consistency loss. We also introduce a mimic loss to supervise the mutual learning of the NeRF style module and fine-tune the 2D stylization decoder. In order to further make our model handle ambiguities of 2D stylization results, we introduce learnable latent codes that obey the probability distributions conditioned on the style. They are attached to training samples as conditional inputs to better learn the style module in our novel stylized NeRF. Experimental results demonstrate that our method is superior to existing approaches in both visual quality and long-range consistency. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 298,432 |
1703.01423 | Soft Pneumatic Gelatin Actuator for Edible Robotics | We present a fully edible pneumatic actuator based on gelatin-glycerol composite. The actuator is monolithic, fabricated via a molding process, and measures 90 mm in length, 20 mm in width, and 17 mm in thickness. Thanks to the composite mechanical characteristics similar to those of silicone elastomers, the actuator exhibits a bending angle of 170.3 {\deg} and a blocked force of 0.34 N at the applied pressure of 25 kPa. These values are comparable to elastomer based pneumatic actuators. As a validation example, two actuators are integrated to form a gripper capable of handling various objects, highlighting the high performance and applicability of the edible actuator. These edible actuators, combined with other recent edible materials and electronics, could lay the foundation for a new type of edible robots. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 69,360 |
1110.5015 | Spectral descriptors for deformable shapes | Informative and discriminative feature descriptors play a fundamental role in deformable shape analysis. For example, they have been successfully employed in correspondence, registration, and retrieval tasks. In recent years, significant attention has been devoted to descriptors obtained from the spectral decomposition of the Laplace-Beltrami operator associated with the shape. Notable examples in this family are the heat kernel signature (HKS) and the wave kernel signature (WKS). Laplacian-based descriptors achieve state-of-the-art performance in numerous shape analysis tasks; they are computationally efficient, isometry-invariant by construction, and can gracefully cope with a variety of transformations. In this paper, we formulate a generic family of parametric spectral descriptors. We argue that in order to be optimal for a specific task, the descriptor should take into account the statistics of the corpus of shapes to which it is applied (the "signal") and those of the class of transformations to which it is made insensitive (the "noise"). While such statistics are hard to model axiomatically, they can be learned from examples. Following the spirit of the Wiener filter in signal processing, we show a learning scheme for the construction of optimal spectral descriptors and relate it to Mahalanobis metric learning. The superiority of the proposed approach is demonstrated on the SHREC'10 benchmark. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 12,741 |
2310.03392 | Unpacking Human-AI Interaction in Safety-Critical Industries: A Systematic Literature Review | Ensuring quality human-AI interaction (HAII) in safety-critical industries is essential. Failure to do so can lead to catastrophic and deadly consequences. Despite this urgency, existing research on HAII is limited, fragmented, and inconsistent. We present here a survey of that literature and recommendations for research best practices that should improve the field. We divided our investigation into the following areas: 1) terms used to describe HAII, 2) primary roles of AI-enabled systems, 3) factors that influence HAII, and 4) how HAII is measured. Additionally, we described the capabilities and maturity of the AI-enabled systems used in safety-critical industries discussed in these articles. We found that no single term is used across the literature to describe HAII and some terms have multiple meanings. According to our literature, seven factors influence HAII: user characteristics (e.g., user personality), user perceptions and attitudes (e.g., user biases), user expectations and experience (e.g., mismatched user expectations and experience), AI interface and features (e.g., interactive design), AI output (e.g., perceived accuracy), explainability and interpretability (e.g., level of detail, user understanding), and usage of AI (e.g., heterogeneity of environments). HAII is most often measured with user-related subjective metrics (e.g., user perceptions, trust, and attitudes), and AI-assisted decision-making is the most common primary role of AI-enabled systems. Based on this review, we conclude that there are substantial research gaps in HAII. Researchers and developers need to codify HAII terminology, involve users throughout the AI lifecycle (especially during development), and tailor HAII in safety-critical industries to the users and environments. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 397,268 |
1803.03684 | Scoring Formulation for Multi-Condition Joint PLDA | The joint PLDA model is a generalization of PLDA where the nuisance variable is no longer considered independent across samples, but potentially shared (tied) across samples that correspond to the same nuisance condition. The original work considered a single nuisance condition, deriving the EM and scoring formulas for this scenario. In this document, we show how to obtain likelihood ratios for scoring when multiple nuisance conditions are allowed in the model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 92,297 |
0811.0134 | A Novel Parser Design Algorithm Based on Artificial Ants | This article presents a unique design for a parser using the Ant Colony Optimization algorithm. The paper implements the intuitive thought process of human mind through the activities of artificial ants. The scheme presented here uses a bottom-up approach and the parsing program can directly use ambiguous or redundant grammars. We allocate a node corresponding to each production rule present in the given grammar. Each node is connected to all other nodes (representing other production rules), thereby establishing a completely connected graph susceptible to the movement of artificial ants. Each ant tries to modify this sentential form by the production rule present in the node and upgrades its position until the sentential form reduces to the start symbol S. Successful ants deposit pheromone on the links that they have traversed through. Eventually, the optimum path is discovered by the links carrying maximum amount of pheromone concentration. The design is simple, versatile, robust and effective and obviates the calculation of the above mentioned sets and precedence relation tables. Further advantages of our scheme lie in i) ascertaining whether a given string belongs to the language represented by the grammar, and ii) finding out the shortest possible path from the given string to the start symbol S in case multiple routes exist. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 2,598 |
2402.08493 | Sparsity via Sparse Group $k$-max Regularization | For the linear inverse problem with sparsity constraints, the $l_0$ regularized problem is NP-hard, and existing approaches either utilize greedy algorithms to find almost-optimal solutions or approximate the $l_0$ regularization with its convex counterparts. In this paper, we propose a novel and concise regularization, namely the sparse group $k$-max regularization, which not only simultaneously enhances the group-wise and in-group sparsity, but also casts no additional restraints on the magnitude of variables in each group, which is especially important for variables at different scales, so that it approximates the $l_0$ norm more closely. We also establish an iterative soft thresholding algorithm with local optimality conditions and complexity analysis provided. Through numerical experiments on both synthetic and real-world datasets, we verify the effectiveness and flexibility of the proposed method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 429,118 |
2311.00444 | Form follows Function: Text-to-Text Conditional Graph Generation based on Functional Requirements | This work focuses on the novel problem setting of generating graphs conditioned on a description of the graph's functional requirements in a downstream task. We pose the problem as a text-to-text generation problem and focus on the approach of fine-tuning a pretrained large language model (LLM) to generate graphs. We propose an inductive bias which incorporates information about the structure of the graph into the LLM's generation process by incorporating message passing layers into an LLM's architecture. To evaluate our proposed method, we design a novel set of experiments using publicly available and widely studied molecule and knowledge graph data sets. Results suggest our proposed approach generates graphs which more closely meet the requested functional requirements, outperforming baselines developed on similar tasks by a statistically significant margin. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 404,655 |
2408.08088 | KGV: Integrating Large Language Models with Knowledge Graphs for Cyber Threat Intelligence Credibility Assessment | Cyber threat intelligence is a critical tool that many organizations and individuals use to protect themselves from sophisticated, organized, persistent, and weaponized cyber attacks. However, few studies have focused on the quality assessment of threat intelligence provided by intelligence platforms, and this work still requires manual analysis by cybersecurity experts. In this paper, we propose a knowledge graph-based verifier, a novel Cyber Threat Intelligence (CTI) quality assessment framework that combines knowledge graphs and Large Language Models (LLMs). Our approach introduces LLMs to automatically extract OSCTI key claims to be verified and utilizes a knowledge graph consisting of paragraphs for fact-checking. This method differs from the traditional way of constructing complex knowledge graphs with entities as nodes. By constructing knowledge graphs with paragraphs as nodes and semantic similarity as edges, it effectively enhances the semantic understanding ability of the model and simplifies labeling requirements. Additionally, to fill the gap in the research field, we created and made public the first dataset for threat intelligence assessment from heterogeneous sources. To the best of our knowledge, this work is the first to create a dataset on threat intelligence reliability verification, providing a reference for future research. Experimental results show that KGV (Knowledge Graph Verifier) significantly improves the performance of LLMs in intelligence quality assessment. Compared with traditional methods, we reduce a large amount of data annotation while the model still exhibits strong reasoning capabilities. Finally, our method can achieve XXX accuracy in network threat assessment. | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | 480,853 |
2008.09566 | Differentiable TAN Structure Learning for Bayesian Network Classifiers | Learning the structure of Bayesian networks is a difficult combinatorial optimization problem. In this paper, we consider learning of tree-augmented naive Bayes (TAN) structures for Bayesian network classifiers with discrete input features. Instead of performing a combinatorial optimization over the space of possible graph structures, the proposed method learns a distribution over graph structures. After training, we select the most probable structure of this distribution. This allows for a joint training of the Bayesian network parameters along with its TAN structure using gradient-based optimization. The proposed method is agnostic to the specific loss and only requires that it is differentiable. We perform extensive experiments using a hybrid generative-discriminative loss based on the discriminative probabilistic margin. Our method consistently outperforms random TAN structures and Chow-Liu TAN structures. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 192,751 |
2212.04559 | SpeechLMScore: Evaluating speech generation using speech language model | While human evaluation is the most reliable metric for evaluating speech generation systems, it is generally costly and time-consuming. Previous studies on automatic speech quality assessment address the problem by predicting human evaluation scores with machine learning models. However, they rely on supervised learning and thus suffer from high annotation costs and domain-shift problems. We propose SpeechLMScore, an unsupervised metric to evaluate generated speech using a speech-language model. SpeechLMScore computes the average log-probability of a speech signal by mapping it into discrete tokens and measures the average probability of generating the sequence of tokens. Therefore, it does not require human annotation and is a highly scalable framework. Evaluation results demonstrate that the proposed metric shows a promising correlation with human evaluation scores on different speech generation tasks including voice conversion, text-to-speech, and speech enhancement. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 335,482 |
2201.05371 | Artificial Intelligence in Software Testing: Impact, Problems, Challenges and Prospect | Artificial Intelligence (AI) is making a significant impact in multiple areas such as medicine, the military, industry, domestic applications, law, and the arts, as AI is capable of performing several roles such as managing smart factories, driving autonomous vehicles, creating accurate weather forecasts, detecting cancer, and serving as personal assistants. Software testing is the process of putting software to the test for abnormal behaviour. Software testing is a tedious, laborious, and time-consuming process. Automation tools have been developed that help to automate some activities of the testing process to enhance quality and timely delivery. Over time, with the inclusion of continuous integration and continuous delivery (CI/CD) pipelines, automation tools are becoming less effective. The testing community is turning to AI to fill the gap, as AI is able to check the code for bugs and errors without any human intervention and in a much faster way than humans. In this study, we aim to recognize the impact of AI technologies on various software testing activities or facets in the STLC. Further, the study aims to recognize and explain some of the biggest challenges software testers face while applying AI to testing. The paper also proposes some key contributions of AI in the future to the domain of software testing. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 275,373 |
2105.11113 | Dynamic Class Queue for Large Scale Face Recognition In the Wild | Learning discriminative representations using large-scale face datasets in the wild is crucial for real-world applications, yet it remains challenging. The difficulties lie in many aspects, and this work focuses on computing resource constraints and long-tailed class distributions. Recently, classification-based representation learning with deep neural networks and well-designed losses has demonstrated good recognition performance. However, the computing and memory cost scales linearly with the number of identities (classes) in the training set, and the learning process suffers from unbalanced classes. In this work, we propose a dynamic class queue (DCQ) to tackle these two problems. Specifically, for each iteration during training, a subset of classes for recognition is dynamically selected and their class weights are dynamically generated on-the-fly and stored in a queue. Since only a subset of classes is selected for each iteration, the computing requirement is reduced. By using a single server without model parallelism, we empirically verify on large-scale datasets that 10% of classes are sufficient to achieve similar performance as using all classes. Moreover, the class weights are dynamically generated in a few-shot manner and are therefore suitable for tail classes with only a few instances. We show clear improvement over a strong baseline on the largest public dataset, Megaface Challenge2 (MF2), which has 672K identities, over 88% of which have fewer than 10 instances. Code is available at https://github.com/bilylee/DCQ | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 236,600 |
1904.05878 | Knowledge Flow: Improve Upon Your Teachers | A zoo of deep nets is available these days for almost any given task, and it is increasingly unclear which net to start with when addressing a new task, or which net to use as an initialization for fine-tuning a new model. To address this issue, in this paper, we develop knowledge flow which moves 'knowledge' from multiple deep nets, referred to as teachers, to a new deep net model, called the student. The structure of the teachers and the student can differ arbitrarily and they can be trained on entirely different tasks with different output spaces too. Upon training with knowledge flow the student is independent of the teachers. We demonstrate our approach on a variety of supervised and reinforcement learning tasks, outperforming fine-tuning and other 'knowledge exchange' methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 127,430 |
2108.04927 | Embodied BERT: A Transformer Model for Embodied, Language-guided Visual Task Completion | Language-guided robots performing home and office tasks must navigate in and interact with the world. Grounding language instructions against visual observations and actions to take in an environment is an open challenge. We present Embodied BERT (EmBERT), a transformer-based model which can attend to high-dimensional, multi-modal inputs across long temporal horizons for language-conditioned task completion. Additionally, we bridge the gap between successful object-centric navigation models used for non-interactive agents and the language-guided visual task completion benchmark, ALFRED, by introducing object navigation targets for EmBERT training. We achieve competitive performance on the ALFRED benchmark, and EmBERT marks the first transformer-based model to successfully handle the long-horizon, dense, multi-modal histories of ALFRED, and the first ALFRED model to utilize object-centric navigation targets. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | false | 250,152 |
1709.05437 | A Causal And-Or Graph Model for Visibility Fluent Reasoning in Tracking Interacting Objects | Tracking humans that are interacting with other subjects or the environment remains unsolved in visual tracking, because the visibility of the humans of interest in videos is unknown and might vary over time. In particular, it is still difficult for state-of-the-art human trackers to recover complete human trajectories in crowded scenes with frequent human interactions. In this work, we consider the visibility status of a subject as a fluent variable, whose change is mostly attributed to the subject's interaction with the surroundings, e.g., crossing behind another object, entering a building, or getting into a vehicle. We introduce a Causal And-Or Graph (C-AOG) to represent the causal-effect relations between an object's visibility fluent and its activities, and develop a probabilistic graph model to jointly reason about the visibility fluent change (e.g., from visible to invisible) and track humans in videos. We formulate this joint task as an iterative search for a feasible causal graph structure that enables fast search algorithms, e.g., a dynamic programming method. We apply the proposed method to challenging video sequences to evaluate its capabilities of estimating visibility fluent changes of subjects and tracking subjects of interest over time. Results with comparisons demonstrate that our method outperforms the alternative trackers and can recover complete trajectories of humans in complicated scenarios with frequent human interactions. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 80,868 |
1212.0220 | Metaheuristic Optimization: Algorithm Analysis and Open Problems | Metaheuristic algorithms are becoming an important part of modern optimization. A wide range of metaheuristic algorithms have emerged over the last two decades, and many metaheuristics such as particle swarm optimization are becoming increasingly popular. Despite their popularity, mathematical analysis of these algorithms lags behind. Convergence analysis still remains unsolved for the majority of metaheuristic algorithms, while efficiency analysis is equally challenging. In this paper, we intend to provide an overview of convergence and efficiency studies of metaheuristics, and try to provide a framework for analyzing metaheuristics in terms of convergence and efficiency. This can form a basis for analyzing other algorithms. We also outline some open questions as further research topics. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 20,077
0905.3030 | Performance of Cognitive Radio Systems with Imperfect Radio Environment
Map Information | In this paper we describe the effect of imperfections in the radio environment map (REM) information on the performance of cognitive radio (CR) systems. Via simulations we explore the relationship between the required precision of the REM and various channel/system properties. For example, the degree of spatial correlation in the shadow fading is a key factor as is the interference constraint employed by the primary user. Based on the CR interferers obtained from the simulations, we characterize the temporal behavior of such systems by computing the level crossing rates (LCRs) of the cumulative interference represented by these CRs. This evaluates the effect of short term fluctuations above acceptable interference levels due to the fast fading. We derive analytical formulae for the LCRs in Rayleigh and Rician fast fading conditions. The analytical results are verified by Monte Carlo simulations. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 3,722 |
2212.01757 | Languages You Know Influence Those You Learn: Impact of Language
Characteristics on Multi-Lingual Text-to-Text Transfer | Multi-lingual language models (LM), such as mBERT, XLM-R, mT5, mBART, have been remarkably successful in enabling natural language tasks in low-resource languages through cross-lingual transfer from high-resource ones. In this work, we try to better understand how such models, specifically mT5, transfer *any* linguistic and semantic knowledge across languages, even though no explicit cross-lingual signals are provided during pre-training. Rather, only unannotated texts from each language are presented to the model separately and independently of one another, and the model appears to implicitly learn cross-lingual connections. This raises several questions that motivate our study, such as: Are the cross-lingual connections between every language pair equally strong? What properties of source and target language impact the strength of cross-lingual transfer? Can we quantify the impact of those properties on the cross-lingual transfer? In our investigation, we analyze a pre-trained mT5 to discover the attributes of cross-lingual connections learned by the model. Through a statistical interpretation framework over 90 language pairs across three tasks, we show that transfer performance can be modeled by a few linguistic and data-derived features. These observations enable us to interpret cross-lingual understanding of the mT5 model. Through these observations, one can favorably choose the best source language for a task, and can anticipate its training data demands. A key finding of this work is that similarity of syntax, morphology and phonology are good predictors of cross-lingual transfer, significantly more than just the lexical similarity of languages. For a given language, we are able to predict zero-shot performance, which increases on a logarithmic scale with the number of few-shot target language data points. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 334,561
1811.00183 | Designing an Effective Metric Learning Pipeline for Speaker Diarization | State-of-the-art speaker diarization systems utilize knowledge from external data, in the form of a pre-trained distance metric, to effectively determine relative speaker identities to unseen data. However, much of the recent focus has been on choosing the appropriate feature extractor, ranging from pre-trained $i-$vectors to representations learned via different sequence modeling architectures (e.g. 1D-CNNs, LSTMs, attention models), while adopting off-the-shelf metric learning solutions. In this paper, we argue that, regardless of the feature extractor, it is crucial to carefully design a metric learning pipeline, namely the loss function, the sampling strategy and the discriminative margin parameter, for building robust diarization systems. Furthermore, we propose to adopt a fine-grained validation process to obtain a comprehensive evaluation of the generalization power of metric learning pipelines. To this end, we measure diarization performance across different language speakers, and variations in the number of speakers in a recording. Using empirical studies, we provide interesting insights into the effectiveness of different design choices and make recommendations. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 112,024
2103.03938 | Causal Analysis of Agent Behavior for AI Safety | As machine learning systems become more powerful they also become increasingly unpredictable and opaque. Yet, finding human-understandable explanations of how they work is essential for their safe deployment. This technical report illustrates a methodology for investigating the causal mechanisms that drive the behaviour of artificial agents. Six use cases are covered, each addressing a typical question an analyst might ask about an agent. In particular, we show that each question cannot be addressed by pure observation alone, but instead requires conducting experiments with systematically chosen manipulations so as to generate the correct causal evidence. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 223,462 |
q-bio/0411030 | Statistical Mechanics Characterization of Neuronal Mosaics | The spatial distribution of neuronal cells is an important requirement for achieving proper neuronal function in several parts of the nervous system of most animals. For instance, specific distribution of photoreceptors and related neuronal cells, particularly the ganglion cells, in mammal's retina is required in order to properly sample the projected scene. This work presents how two concepts from the areas of statistical mechanics and complex systems, namely the \emph{lacunarity} and the \emph{multiscale entropy} (i.e. the entropy calculated over progressively diffused representations of the cell mosaic), have allowed effective characterization of the spatial distribution of retinal cells. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 540,844 |
2404.05365 | NLP Progress in Indigenous Latin American Languages | The paper focuses on the marginalization of indigenous language communities in the face of rapid technological advancements. We highlight the cultural richness of these languages and the risk they face of being overlooked in the realm of Natural Language Processing (NLP). We aim to bridge the gap between these communities and researchers, emphasizing the need for inclusive technological advancements that respect indigenous community perspectives. We present the NLP progress of indigenous Latin American languages and a survey that covers the status of indigenous languages in Latin America, their representation in NLP, and the challenges and innovations required for their preservation and development. The paper contributes to the current literature in understanding the need and progress of NLP for indigenous communities of Latin America, and for low-resource and indigenous communities in general. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 445,061
1801.10287 | An Incremental Off-policy Search in a Model-free Markov Decision Process
Using a Single Sample Path | In this paper, we consider a modified version of the control problem in a model-free Markov decision process (MDP) setting with large state and action spaces. The control problem most commonly addressed in the contemporary literature is to find an optimal policy which maximizes the value function, i.e., the long run discounted reward of the MDP. The current settings also assume access to a generative model of the MDP with the hidden premise that observations of the system behaviour in the form of sample trajectories can be obtained with ease from the model. In this paper, we consider a modified version, where the cost function is the expectation of a non-convex function of the value function without access to the generative model. Rather, we assume that a sample trajectory generated using a priori chosen behaviour policy is made available. In this restricted setting, we solve the modified control problem in its true sense, i.e., to find the best possible policy given this limited information. We propose a stochastic approximation algorithm based on the well-known cross entropy method which is data (sample trajectory) efficient, stable, robust as well as computationally and storage efficient. We provide a proof of convergence of our algorithm to a policy which is globally optimal relative to the behaviour policy. We also present experimental results to corroborate our claims and we demonstrate the superiority of the solution produced by our algorithm compared to the state-of-the-art algorithms under appropriately chosen behaviour policy. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 89,257
2410.01649 | shapiq: Shapley Interactions for Machine Learning | Originally rooted in game theory, the Shapley Value (SV) has recently become an important tool in machine learning research. Perhaps most notably, it is used for feature attribution and data valuation in explainable artificial intelligence. Shapley Interactions (SIs) naturally extend the SV and address its limitations by assigning joint contributions to groups of entities, which enhance understanding of black box machine learning models. Due to the exponential complexity of computing SVs and SIs, various methods have been proposed that exploit structural assumptions or yield probabilistic estimates given limited resources. In this work, we introduce shapiq, an open-source Python package that unifies state-of-the-art algorithms to efficiently compute SVs and any-order SIs in an application-agnostic framework. Moreover, it includes a benchmarking suite containing 11 machine learning applications of SIs with pre-computed games and ground-truth values to systematically assess computational performance across domains. For practitioners, shapiq is able to explain and visualize any-order feature interactions in predictions of models, including vision transformers, language models, as well as XGBoost and LightGBM with TreeSHAP-IQ. With shapiq, we extend shap beyond feature attributions and consolidate the application of SVs and SIs in machine learning that facilitates future research. The source code and documentation are available at https://github.com/mmschlk/shapiq. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 493,869 |
2112.00065 | Boosting EfficientNets Ensemble Performance via Pseudo-Labels and
Synthetic Images by pix2pixHD for Infection and Ischaemia Classification in
Diabetic Foot Ulcers | Diabetic foot ulcers are a common manifestation of lesions on the diabetic foot, a syndrome acquired as a long-term complication of diabetes mellitus. Accompanying neuropathy and vascular damage promote acquisition of pressure injuries and tissue death due to ischaemia. Affected areas are prone to infections, hindering the healing progress. The research at hand investigates an approach on classification of infection and ischaemia, conducted as part of the Diabetic Foot Ulcer Challenge (DFUC) 2021. Different models of the EfficientNet family are utilized in ensembles. An extension strategy for the training data is applied, involving pseudo-labeling for unlabeled images, and extensive generation of synthetic images via pix2pixHD to cope with severe class imbalances. The resulting extended training dataset features $8.68$ times the size of the baseline and shows a real to synthetic image ratio of $1:3$. Performances of models and ensembles trained on the baseline and extended training dataset are compared. Synthetic images featured a broad qualitative variety. Results show that models trained on the extended training dataset as well as their ensemble benefit from the large extension. F1-Scores for rare classes receive outstanding boosts, while those for common classes are either not harmed or boosted moderately. A critical discussion concretizes benefits and identifies limitations, suggesting improvements. The work concludes that classification performance of individual models as well as that of ensembles can be boosted utilizing synthetic images. Especially performance for rare classes benefits notably. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 269,024 |
1910.04456 | Breathing deformation model -- application to multi-resolution abdominal
MRI | Dynamic MRI is a technique of acquiring a series of images continuously to follow the physiological changes over time. However, such fast imaging results in low resolution images. In this work, an abdominal deformation model computed from dynamic low resolution images has been applied to a high resolution image, acquired previously, to generate dynamic high resolution MRI. Dynamic low resolution images were simulated into different breathing phases (inhale and exhale). Then, the image registration between breathing time points was performed using the B-spline SyN deformable model and using cross-correlation as a similarity metric. The deformation model between different breathing phases was estimated from highly undersampled data. This deformation model was then applied to the high resolution images to obtain high resolution images of different breathing phases. The results indicated that the deformation model could be computed from relatively very low resolution images. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 148,773
1307.0846 | Semi-supervised Ranking Pursuit | We propose a novel sparse preference learning/ranking algorithm. Our algorithm approximates the true utility function by a weighted sum of basis functions using the squared loss on pairs of data points, and is a generalization of the kernel matching pursuit method. It can operate both in a supervised and a semi-supervised setting and allows efficient search for multiple, near-optimal solutions. Furthermore, we describe the extension of the algorithm suitable for combined ranking and regression tasks. In our experiments we demonstrate that the proposed algorithm outperforms several state-of-the-art learning methods when taking into account unlabeled data and performs comparably in a supervised learning scenario, while providing sparser solutions. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 25,583 |
1907.06571 | Adversarial Video Generation on Complex Datasets | Generative models of natural images have progressed towards high fidelity samples by the strong leveraging of scale. We attempt to carry this success to the field of video modeling by showing that large Generative Adversarial Networks trained on the complex Kinetics-600 dataset are able to produce video samples of substantially higher complexity and fidelity than previous work. Our proposed model, Dual Video Discriminator GAN (DVD-GAN), scales to longer and higher resolution videos by leveraging a computationally efficient decomposition of its discriminator. We evaluate on the related tasks of video synthesis and video prediction, and achieve new state-of-the-art Fr\'echet Inception Distance for prediction for Kinetics-600, as well as state-of-the-art Inception Score for synthesis on the UCF-101 dataset, alongside establishing a strong baseline for synthesis on Kinetics-600. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 138,655 |
2407.18038 | TiCoSS: Tightening the Coupling between Semantic Segmentation and Stereo
Matching within A Joint Learning Framework | Semantic segmentation and stereo matching, respectively analogous to the ventral and dorsal streams in our human brain, are two key components of autonomous driving perception systems. Addressing these two tasks with separate networks is no longer the mainstream direction in developing computer vision algorithms, particularly with the recent advances in large vision models and embodied artificial intelligence. The trend is shifting towards combining them within a joint learning framework, especially emphasizing feature sharing between the two tasks. The major contributions of this study lie in comprehensively tightening the coupling between semantic segmentation and stereo matching. Specifically, this study introduces three novelties: (1) a tightly coupled, gated feature fusion strategy, (2) a hierarchical deep supervision strategy, and (3) a coupling tightening loss function. The combined use of these technical contributions results in TiCoSS, a state-of-the-art joint learning framework that simultaneously tackles semantic segmentation and stereo matching. Through extensive experiments on the KITTI and vKITTI2 datasets, along with qualitative and quantitative analyses, we validate the effectiveness of our developed strategies and loss function, and demonstrate its superior performance compared to prior arts, with a notable increase in mIoU by over 9%. Our source code will be publicly available at mias.group/TiCoSS upon publication. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 476,220 |
2409.14198 | A Sinkhorn Regularized Adversarial Network for Image Guided DEM
Super-resolution using Frequency Selective Hybrid Graph Transformer | Digital Elevation Model (DEM) is an essential aspect in the remote sensing (RS) domain to analyze various applications related to surface elevations. Here, we address the generation of high-resolution (HR) DEMs using HR multi-spectral (MX) satellite imagery as a guide by introducing a novel hybrid transformer model consisting of Densely connected Multi-Residual Block (DMRB) and multi-headed Frequency Selective Graph Attention (M-FSGA). To promptly regulate this process, we utilize the notion of discriminator spatial maps as the conditional attention to the MX guide. Further, we present a novel adversarial objective related to optimizing Sinkhorn distance with classical GAN. In this regard, we provide both theoretical and empirical substantiation of better performance in terms of vanishing gradient issues and numerical convergence. Based on our experiments on 4 different DEM datasets, we demonstrate both qualitative and quantitative comparisons with available baseline methods and show that the performance of our proposed model is superior to others with sharper details and minimal errors. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 490,369 |
2209.11225 | Quantum theory in finite dimension cannot explain every general process
with finite memory | Arguably, the largest class of stochastic processes generated by means of a finite memory consists of those that are sequences of observations produced by sequential measurements in a suitable generalized probabilistic theory (GPT). These are constructed from a finite-dimensional memory evolving under a set of possible linear maps, and with probabilities of outcomes determined by linear functions of the memory state. Examples of such models are given by classical hidden Markov processes, where the memory state is a probability distribution, and at each step it evolves according to a non-negative matrix, and hidden quantum Markov processes, where the memory state is a finite dimensional quantum state, and at each step it evolves according to a completely positive map. Here we show that the set of processes admitting a finite-dimensional explanation does not need to be explainable in terms of either classical probability or quantum mechanics. To wit, we exhibit families of processes that have a finite-dimensional explanation, defined manifestly by the dynamics of an explicitly given GPT, but that do not admit a quantum, and therefore not even classical, explanation in finite dimension. Furthermore, we present a family of quantum processes on qubits and qutrits that do not admit a classical finite-dimensional realization, which includes examples introduced earlier by Fox, Rubin, Dharmadikari and Nadkarni as functions of infinite dimensional Markov chains, and lower bound the size of the memory of a classical model realizing a noisy version of the qubit processes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 319,117
2111.08600 | Towards Real-Time Monocular Depth Estimation for Robotics: A Survey | As an essential component for many autonomous driving and robotic activities such as ego-motion estimation, obstacle avoidance and scene understanding, monocular depth estimation (MDE) has attracted great attention from the computer vision and robotics communities. Over the past decades, a large number of methods have been developed. To the best of our knowledge, however, there is not a comprehensive survey of MDE. This paper aims to bridge this gap by reviewing 197 relevant articles published between 1970 and 2021. In particular, we provide a comprehensive survey of MDE covering various methods, introduce the popular performance evaluation metrics and summarize publically available datasets. We also summarize available open-source implementations of some representative methods and compare their performances. Furthermore, we review the application of MDE in some important robotic tasks. Finally, we conclude this paper by presenting some promising directions for future research. This survey is expected to assist readers to navigate this research field. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 266,754 |
2204.07205 | Expanding the Reach of Research Computing: A Landscape Study | Research computing continues to play an ever-increasing role in academia. Access to computing resources, however, varies greatly between institutions. Sustaining the growing need for computing skills and access to advanced cyberinfrastructure requires that computing resources be available to students at all levels of scholarship, including community colleges. The National Science Foundation-funded Building Research Innovation in Community Colleges (BRICCs) community set out to understand the challenges faced by administrators, researchers and faculty in building a sustainable research computing continuum that extends to smaller and two-year terminal degree granting institutions. BRICCs' purpose is to address the technology gaps and encourage the development of the curriculum needed to grow a computationally proficient research workforce. Toward addressing these goals, we performed a landscape study that culminated with a community workshop. Here, we present our key findings from workshop discussions and identify next steps to be taken by BRICCs, funding agencies, and the broader cyberinfrastructure community. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 291,605
2212.04120 | Denoising Self-attentive Sequential Recommendation | Transformer-based sequential recommenders are very powerful for capturing both short-term and long-term sequential item dependencies. This is mainly attributed to their unique self-attention networks to exploit pairwise item-item interactions within the sequence. However, real-world item sequences are often noisy, which is particularly true for implicit feedback. For example, a large portion of clicks do not align well with user preferences, and many products end up with negative reviews or being returned. As such, the current user action only depends on a subset of items, not on the entire sequences. Many existing Transformer-based models use full attention distributions, which inevitably assign certain credits to irrelevant items. This may lead to sub-optimal performance if Transformers are not regularized properly. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 335,335 |
1604.08418 | Stable Throughput Region of the Two-User Broadcast Channel | In this paper we consider the two-user broadcast channel and we characterize its stable throughout region. We start the analysis by providing the stability region for the general case without any specific considerations on transmission and reception mechanisms. We also provide conditions for the stable throughput region to be convex. Subsequently, we consider the case where the transmitter uses superposition coding and we consider two special cases for the receivers. The first one is when both receivers treat interference as noise. The second is when the user with a better channel uses successive decoding and the other receiver treats interference as noise. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 55,210 |
2406.08952 | Self-orthogonal flags of codes and translation of flags of algebraic
geometry codes | A flag $C_0 \subsetneq C_1 \cdots \subsetneq C_s \subsetneq {\mathbb F}_q^n $ of linear codes is said to be self-orthogonal if the duals of the codes in the flag satisfy $C_{i}^\perp=C_{s-i}$, and it is said to satisfy the isometry-dual property with respect to an isometry vector ${\bf x}$ if $C_i^\perp={\bf x} C_{s-i}$ for $i=1, \dots, s$. We characterize complete (i.e. $s=n$) flags with the isometry-dual property by means of the existence of a word with non-zero coordinates in a certain linear subspace of ${\mathbb F}_q^n$. For flags of algebraic geometry (AG) codes we prove a so-called translation property of isometry-dual flags and give a construction of complete self-orthogonal flags, providing examples of self-orthogonal flags over some maximal function fields. At the end we characterize the divisors giving the isometry-dual property and the related isometry vectors showing that for each function field there is only a finite number of isometry vectors and that they are related by cyclic repetitions. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 463,703 |
2207.02152 | UniCR: Universally Approximated Certified Robustness via Randomized
Smoothing | We study certified robustness of machine learning classifiers against adversarial perturbations. In particular, we propose the first universally approximated certified robustness (UniCR) framework, which can approximate the robustness certification of any input on any classifier against any $\ell_p$ perturbations with noise generated by any continuous probability distribution. Compared with the state-of-the-art certified defenses, UniCR provides many significant benefits: (1) the first universal robustness certification framework for the above 4 'any's; (2) automatic robustness certification that avoids case-by-case analysis, (3) tightness validation of certified robustness, and (4) optimality validation of noise distributions used by randomized smoothing. We conduct extensive experiments to validate the above benefits of UniCR and the advantages of UniCR over state-of-the-art certified defenses against $\ell_p$ perturbations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 306,417 |
2412.16876 | MAGIC++: Efficient and Resilient Modality-Agnostic Semantic Segmentation
via Hierarchical Modality Selection | In this paper, we address the challenging modality-agnostic semantic segmentation (MaSS), aiming at centering the value of every modality at every feature granularity. Training with all available visual modalities and effectively fusing an arbitrary combination of them is essential for robust multi-modal fusion in semantic segmentation, especially in real-world scenarios, yet remains less explored to date. Existing approaches often place RGB at the center, treating other modalities as secondary, resulting in an asymmetric architecture. However, RGB alone can be limiting in scenarios like nighttime, where modalities such as event data excel. Therefore, a resilient fusion model must dynamically adapt to each modality's strengths while compensating for weaker inputs. To this end, we introduce the MAGIC++ framework, which comprises two key plug-and-play modules for effective multi-modal fusion and hierarchical modality selection that can be equipped with various backbone models. Firstly, we introduce a multi-modal interaction module to efficiently process features from the input multi-modal batches and extract complementary scene information with channel-wise and spatial-wise guidance. On top, a unified multi-scale arbitrary-modal selection module is proposed to utilize the aggregated features as the benchmark to rank the multi-modal features based on the similarity scores at hierarchical feature spaces. This way, our method can eliminate the dependence on the RGB modality at every feature granularity and better overcome sensor failures and environmental noises while ensuring segmentation performance. Under the common multi-modal setting, our method achieves state-of-the-art performance on both real-world and synthetic benchmarks. Moreover, our method is superior in the novel modality-agnostic setting, where it outperforms prior arts by a large margin. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 519,725
2002.00251 | Multi-Modal Music Information Retrieval: Augmenting Audio-Analysis with
Visual Computing for Improved Music Video Analysis | This thesis combines audio-analysis with computer vision to approach Music Information Retrieval (MIR) tasks from a multi-modal perspective. This thesis focuses on the information provided by the visual layer of music videos and how it can be harnessed to augment and improve tasks of the MIR research domain. The main hypothesis of this work is based on the observation that certain expressive categories such as genre or theme can be recognized on the basis of the visual content alone, without the sound being heard. This leads to the hypothesis that there exists a visual language that is used to express mood or genre. As a further consequence, it can be concluded that this visual information is music related and thus should be beneficial for the corresponding MIR tasks such as music genre classification or mood recognition. A series of comprehensive experiments and evaluations are conducted which are focused on the extraction of visual information and its application in different MIR tasks. A custom dataset is created, suitable to develop and test visual features which are able to represent music related information. Evaluations range from low-level visual features to high-level concepts retrieved by means of Deep Convolutional Neural Networks. Additionally, new visual features are introduced capturing rhythmic visual patterns. In all of these experiments the audio-based results serve as benchmark for the visual and audio-visual approaches. The experiments are conducted for three MIR tasks: Artist Identification, Music Genre Classification and Cross-Genre Classification. Experiments show that an audio-visual approach harnessing high-level semantic information gained from visual concept detection outperforms audio-only genre-classification accuracy by 16.43%. | false | false | true | false | false | true | false | false | false | false | false | true | false | false | false | false | false | true | 162,304
2306.08433 | "Definition Modeling: To model definitions." Generating Definitions With
Little to No Semantics | Definition Modeling, the task of generating definitions, was first proposed as a means to evaluate the semantic quality of word embeddings-a coherent lexical semantic representation of a word in context should contain all the information necessary to generate its definition. The relative novelty of this task entails that we do not know which factors are actually relied upon by a Definition Modeling system. In this paper, we present evidence that the task may not involve as much semantics as one might expect: we show how an earlier model from the literature is both rather insensitive to semantic aspects such as explicit polysemy and reliant on formal similarities between headwords and words occurring in its glosses, casting doubt on the validity of the task as a means to evaluate embeddings. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 373,413
2312.03612 | Physical Symbolic Optimization | We present a framework for constraining the automatic sequential generation of equations to obey the rules of dimensional analysis by construction. Combining this approach with reinforcement learning, we built $\Phi$-SO, a Physical Symbolic Optimization method for recovering analytical functions from physical data leveraging units constraints. Our symbolic regression algorithm achieves state-of-the-art results in contexts in which variables and constants have known physical units, outperforming all other methods on SRBench's Feynman benchmark in the presence of noise (exceeding 0.1%) and showing resilience even in the presence of significant (10%) levels of noise. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 413,326 |
2301.04050 | Design, Modeling and Control of a Quadruped Robot SPIDAR: Spherically
Vectorable and Distributed Rotors Assisted Air-Ground Amphibious Quadruped
Robot | Multimodal locomotion capability is an emerging topic in the robotics field, and various novel mobile robots have been developed to enable maneuvering in both terrestrial and aerial domains. Among these hybrid robots, several state-of-the-art bipedal robots enable complex walking motion interlaced with flying. These robots are also desired to have manipulation ability; however, it is difficult for the current forms to keep stability with joint motion in midair due to the centralized rotor arrangement. Therefore, in this work, we develop a novel air-ground amphibious quadruped robot called SPIDAR which is assisted by spherically vectorable rotors distributed in each link to enable both walking motion and transformable flight. First, we present a unique mechanical design for a quadruped robot that enables terrestrial and aerial locomotion. We then reveal the modeling method for this hybrid robot platform, and further develop an integrated control strategy for both walking and flying with joint motion. Finally, we demonstrate the feasibility of the proposed hybrid quadruped robot by performing a seamless motion that involves static walking and subsequent flight. To the best of our knowledge, this work is the first to achieve a quadruped robot with multimodal locomotion capability, which also shows the potential of manipulation in multiple domains. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 339,962
2309.00189 | Data-Driven Safety Filter: An Input-Output Perspective | Implementation of learning-based control remains challenging due to the absence of safety guarantees. Safe control methods have turned to model-based safety filters to address these challenges, but this is paradoxical when the ultimate goal is a model-free, data-driven control solution. Addressing the core question of "Can we ensure the safety of any learning-based algorithm without explicit prediction models and state estimation?", this paper proposes a Data-Driven Safety Filter (DDSF) grounded in Behavioral System Theory (BST). The proposed method needs only a single system trajectory available in an offline dataset to modify unsafe learning inputs to safe inputs. This contribution addresses safe control in the input-output framework and therefore does not require full state measurements or explicit state estimation. Since no explicit model is required, the proposed safe control solution is not affected by unmodeled dynamics and unstructured uncertainty and can provide a safe solution for systems with unknown time delays. The effectiveness of the proposed DDSF is illustrated in simulation for a high-order six-degree-of-freedom aerial robot and a time-delay adaptive cruise control system. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 389,225
1804.01882 | Hyperbolic Entailment Cones for Learning Hierarchical Embeddings | Learning graph representations via low-dimensional embeddings that preserve relevant network properties is an important class of problems in machine learning. We here present a novel method to embed directed acyclic graphs. Following prior work, we first advocate for using hyperbolic spaces which provably model tree-like structures better than Euclidean geometry. Second, we view hierarchical relations as partial orders defined using a family of nested geodesically convex cones. We prove that these entailment cones admit an optimal shape with a closed form expression both in the Euclidean and hyperbolic spaces, and they canonically define the embedding learning process. Experiments show significant improvements of our method over strong recent baselines both in terms of representational capacity and generalization. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 94,303 |
2306.09132 | Enlarged Large Margin Loss for Imbalanced Classification | We propose a novel loss function for imbalanced classification. LDAM loss, which minimizes a margin-based generalization bound, is widely utilized for class-imbalanced image classification. Although, by using LDAM loss, it is possible to obtain large margins for the minority classes and small margins for the majority classes, its relation to the large margin, which is included in the original softmax cross entropy loss, has not yet been clarified. In this study, we reconvert the formula of LDAM loss using the concept of the large margin softmax cross entropy loss based on the softplus function and confirm that LDAM loss includes a wider large margin than softmax cross entropy loss. Furthermore, we propose a novel Enlarged Large Margin (ELM) loss, which can further widen the large margin of LDAM loss. ELM loss utilizes the large margin for the maximum logit of the incorrect class in addition to the basic margin used in LDAM loss. Through experiments conducted on imbalanced CIFAR datasets and large-scale datasets with long-tailed distribution, we confirmed that classification accuracy was much improved compared with LDAM loss and conventional losses for imbalanced classification. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 373,691
2409.00057 | First Single-Carrier Transmission at Net Data Rates of 1.6 Tb/s over
9075 km and 2.4 Tb/s over 1210 km Using 300 GBd Dual-Polarization Signals and
Probabilistic Constellation Shaping | We report long-haul transmissions of single-carrier 300 GBd dual-polarization signals with optical arbitrary waveform generation and measurement. We demonstrate net 1.6 Tb/s over 9075 km with PCS-16QAM and 2.4 Tb/s over 1210 km with PCS-36QAM. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 484,746 |
2404.16411 | Label-Free Topic-Focused Summarization Using Query Augmentation | In today's data and information-rich world, summarization techniques are essential in harnessing vast text to extract key information and enhance decision-making and efficiency. In particular, topic-focused summarization is important due to its ability to tailor content to specific aspects of an extended text. However, this usually requires extensive labelled datasets and considerable computational power. This study introduces a novel method, Augmented-Query Summarization (AQS), for topic-focused summarization without the need for extensive labelled datasets, leveraging query augmentation and hierarchical clustering. This approach facilitates the transferability of machine learning models to the task of summarization, circumventing the need for topic-specific training. Through real-world tests, our method demonstrates the ability to generate relevant and accurate summaries, showing its potential as a cost-effective solution in data-rich environments. This innovation paves the way for broader application and accessibility in the field of topic-focused summarization technology, offering a scalable, efficient method for personalized content extraction. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 449,484 |
1408.6959 | Heterogeneous Recovery Rates against SIS Epidemics in Directed Networks | The nodes in communication networks are most likely equipped with different recovery resources, which allow them to recover from a virus at different rates. In this paper, we aim to understand how to allocate the limited recovery resources to efficiently prevent the spreading of epidemics. We study the susceptible-infected-susceptible (SIS) epidemic model on directed scale-free networks. In the classic SIS model, a susceptible node can be infected by an infected neighbor with the infection rate $\beta$ and an infected node can be recovered to be susceptible again with the recovery rate $\delta$. In the steady state a fraction $y_\infty$ of nodes are infected, which shows how severely the network is infected. We propose to allocate the recovery rate $\delta_i$ for node $i$ according to its indegree and outdegree-$\delta_i\scriptsize{\sim}k_{i,in}^{\alpha_{in}}k_{i,out}^{\alpha_{out}}$, given the finite average recovery rate $\langle\delta\rangle$ representing the limited recovery resources over the whole network. We find that, by tuning the two scaling exponents $\alpha_{in}$ and $\alpha_{out}$, we can always reduce the infection fraction $y_\infty$, thus reducing the extent of infections, compared to the homogeneous recovery rate allocation. Moreover, we can find our optimal strategy via the optimal choice of the exponents $\alpha_{in}$ and $\alpha_{out}$. Our optimal strategy indicates that when the recovery resources are sufficient, more resources should be allocated to the nodes with a larger indegree or outdegree, but when the recovery resources are very limited, only the nodes with a larger outdegree should be equipped with more resources. We also find that our optimal strategy works better when the recovery resources are sufficient but not yet able to make the epidemic die out, and when the indegree-outdegree correlation is small.
| false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 35,682 |
2308.01677 | Efficiency of First-Order Methods for Low-Rank Tensor Recovery with the
Tensor Nuclear Norm Under Strict Complementarity | We consider convex relaxations for recovering low-rank tensors based on constrained minimization over a ball induced by the tensor nuclear norm, recently introduced in \cite{tensor_tSVD}. We build on a recent line of results that considered convex relaxations for the recovery of low-rank matrices and established that under a strict complementarity condition (SC), both the convergence rate and per-iteration runtime of standard gradient methods may improve dramatically. We develop the appropriate strict complementarity condition for the tensor nuclear norm ball and obtain the following main results under this condition: 1. When the objective to minimize is of the form $f(\mX)=g(\mA\mX)+\langle{\mC,\mX}\rangle$ , where $g$ is strongly convex and $\mA$ is a linear map (e.g., least squares), a quadratic growth bound holds, which implies linear convergence rates for standard projected gradient methods, despite the fact that $f$ need not be strongly convex. 2. For a smooth objective function, when initialized in certain proximity of an optimal solution which satisfies SC, standard projected gradient methods only require SVD computations (for projecting onto the tensor nuclear norm ball) of rank that matches the tubal rank of the optimal solution. In particular, when the tubal rank is constant, this implies nearly linear (in the size of the tensor) runtime per iteration, as opposed to super linear without further assumptions. 3. For a nonsmooth objective function which admits a popular smooth saddle-point formulation, we derive similar results to the latter for the well known extragradient method. An additional contribution which may be of independent interest, is the rigorous extension of many basic results regarding tensors of arbitrary order, which were previously obtained only for third-order tensors. 
| false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 383,329 |
1208.2787 | Analysis and Construction of Functional Regenerating Codes with Uncoded
Repair for Distributed Storage Systems | Modern distributed storage systems apply redundancy coding techniques to stored data. One form of redundancy is based on regenerating codes, which can minimize the repair bandwidth, i.e., the amount of data transferred when repairing a failed storage node. Existing regenerating codes mainly require surviving storage nodes to encode data during repair. In this paper, we study functional minimum storage regenerating (FMSR) codes, which enable uncoded repair without the encoding requirement in surviving nodes, while preserving the minimum repair bandwidth guarantees and also minimizing disk reads. Under double-fault tolerance settings, we formally prove the existence of FMSR codes, and provide a deterministic FMSR code construction that can significantly speed up the repair process. We further implement and evaluate our deterministic FMSR codes to show the benefits. Our work is built atop a practical cloud storage system that implements FMSR codes, and we provide theoretical validation to justify the practicality of FMSR codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 18,069
2401.09804 | Clickbait vs. Quality: How Engagement-Based Optimization Shapes the
Content Landscape in Online Platforms | Online content platforms commonly use engagement-based optimization when making recommendations. This encourages content creators to invest in quality, but also rewards gaming tricks such as clickbait. To understand the total impact on the content landscape, we study a game between content creators competing on the basis of engagement metrics and analyze the equilibrium decisions about investment in quality and gaming. First, we show the content created at equilibrium exhibits a positive correlation between quality and gaming, and we empirically validate this finding on a Twitter dataset. Using the equilibrium structure of the content landscape, we then examine the downstream performance of engagement-based optimization along several axes. Perhaps counterintuitively, the average quality of content consumed by users can decrease at equilibrium as gaming tricks become more costly for content creators to employ. Moreover, engagement-based optimization can perform worse in terms of user utility than a baseline with random recommendations, and engagement-based optimization is also suboptimal in terms of realized engagement relative to quality-based optimization. Altogether, our results highlight the need to consider content creator incentives when evaluating a platform's choice of optimization metric. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | true | 422,395 |
1910.13088 | Estimating the Density of States of Boolean Satisfiability Problems on
Classical and Quantum Computing Platforms | Given a Boolean formula $\phi(x)$ in conjunctive normal form (CNF), the density of states counts the number of variable assignments that violate exactly $e$ clauses, for all values of $e$. Thus, the density of states is a histogram of the number of unsatisfied clauses over all possible assignments. This computation generalizes both maximum-satisfiability (MAX-SAT) and model counting problems and not only provides insight into the entire solution space, but also yields a measure for the \emph{hardness} of the problem instance. Consequently, in real-world scenarios, this problem is typically infeasible even when using state-of-the-art algorithms. While finding an exact answer to this problem is a computationally intensive task, we propose a novel approach for estimating density of states based on the concentration of measure inequalities. The methodology results in a quadratic unconstrained binary optimization (QUBO), which is particularly amenable to quantum annealing-based solutions. We present the overall approach and compare results from the D-Wave quantum annealer against the best-known classical algorithms such as the Hamze-de Freitas-Selby (HFS) algorithm and satisfiability modulo theory (SMT) solvers. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 151,293 |
2408.10287 | Recognizing Beam Profiles from Silicon Photonics Gratings using
Transformer Model | Over the past decade, there has been extensive work in developing integrated silicon photonics (SiPh) gratings for the optical addressing of trapped ion qubits in the ion trap quantum computing community. However, when viewing beam profiles from infrared (IR) cameras, it is often difficult to determine the corresponding heights where the beam profiles are located. In this work, we developed transformer models to recognize the corresponding height categories of beam profiles of light from SiPh gratings. The model is trained using two techniques: (1) input patches, and (2) input sequence. The model trained with input patches achieved a recognition accuracy of 0.938, while the model trained with input sequence shows a lower accuracy of 0.895. However, when the model training is repeated for 150 cycles, the model trained with input patches shows inconsistent accuracy ranging from 0.445 to 0.959, while the model trained with input sequence exhibits higher accuracy values between 0.789 and 0.936. The obtained outcomes can be extended to various applications, including auto-focusing of the light beam and auto-adjustment of the z-axis stage to acquire desired beam profiles. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 481,802
2111.05955 | Keys to Accurate Feature Extraction Using Residual Spiking Neural
Networks | Spiking neural networks (SNNs) have become an interesting alternative to conventional artificial neural networks (ANNs) thanks to their temporal processing capabilities and energy-efficient implementations in neuromorphic hardware. However, the challenges involved in training SNNs have limited their performance in terms of accuracy and thus their applications. Improving learning algorithms and neural architectures for more accurate feature extraction is therefore one of the current priorities in SNN research. In this paper we present a study on the key components of modern spiking architectures. We design a spiking version of the successful residual network architecture and provide an in-depth study on the possible implementations of spiking residual connections. This study shows how, depending on the use case, the optimal residual connection implementation may vary. Additionally, we empirically compare different techniques in image classification datasets taken from the best performing networks. Our results provide a state-of-the-art guide to SNN design, which allows one to make informed choices when trying to build the optimal visual feature extractor. Finally, our network outperforms previous SNN architectures in CIFAR-10 (94.14%) and CIFAR-100 (74.65%) datasets and matches the state of the art in DVS-CIFAR10 (72.98%), with fewer parameters than the previous state of the art and without the need for ANN-SNN conversion. Code available at https://github.com/VicenteAlex/Spiking_ResNet | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 265,928
2112.12911 | Cluster-guided Image Synthesis with Unconditional Models | Generative Adversarial Networks (GANs) are the driving force behind the state-of-the-art in image generation. Despite their ability to synthesize high-resolution photo-realistic images, generating content with on-demand conditioning of different granularity remains a challenge. This challenge is usually tackled by annotating massive datasets with the attributes of interest, a laborious task that is not always a viable option. Therefore, it is vital to introduce control into the generation process of unsupervised generative models. In this work, we focus on controllable image generation by leveraging GANs that are well-trained in an unsupervised fashion. To this end, we discover that the representation space of intermediate layers of the generator forms a number of clusters that separate the data according to semantically meaningful attributes (e.g., hair color and pose). By conditioning on the cluster assignments, the proposed method is able to control the semantic class of the generated image. Our approach enables sampling from each cluster by Implicit Maximum Likelihood Estimation (IMLE). We showcase the efficacy of our approach on faces (CelebA-HQ and FFHQ), animals (Imagenet) and objects (LSUN) using different pre-trained generative models. The results highlight the ability of our approach to condition image generation on attributes like gender, pose and hair style on faces, as well as a variety of features on different object classes. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 273,079 |
2402.02082 | GliDe with a CaPE: A Low-Hassle Method to Accelerate Speculative
Decoding | Speculative decoding is a relatively new decoding framework that leverages small and efficient draft models to reduce the latency of LLMs. In this study, we introduce GliDe and CaPE, two low-hassle modifications to vanilla speculative decoding to further improve the decoding speed of a frozen LLM. Specifically, GliDe is a modified draft model architecture that reuses the cached keys and values from the target LLM, while CaPE is a proposal expansion method that uses the draft model's confidence scores to help select additional candidate tokens for verification. Extensive experiments on different benchmarks demonstrate that our proposed GliDe draft model significantly reduces the expected decoding latency. Additional evaluation using walltime reveals that GliDe can accelerate Vicuna models up to 2.17x and further extend the improvement to 2.61x with CaPE. We will release our code, data, and the trained draft models. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 426,375 |
2502.03038 | The Cake that is Intelligence and Who Gets to Bake it: An AI Analogy and
its Implications for Participation | In a widely popular analogy by Turing Award Laureate Yann LeCun, machine intelligence has been compared to cake - where unsupervised learning forms the base, supervised learning adds the icing, and reinforcement learning is the cherry on top. We expand this 'cake that is intelligence' analogy from a simple structural metaphor to the full life-cycle of AI systems, extending it to sourcing of ingredients (data), conception of recipes (instructions), the baking process (training), and the tasting and selling of the cake (evaluation and distribution). Leveraging our re-conceptualization, we describe each step's entailed social ramifications and how they are bounded by statistical assumptions within machine learning. Whereas these technical foundations and social impacts are deeply intertwined, they are often studied in isolation, creating barriers that restrict meaningful participation. Our re-conceptualization paves the way to bridge this gap by mapping where technical foundations interact with social outcomes, highlighting opportunities for cross-disciplinary dialogue. Finally, we conclude with actionable recommendations at each stage of the metaphorical AI cake's life-cycle, empowering prospective AI practitioners, users, and researchers, with increased awareness and ability to engage in broader AI discourse. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 530,573 |
2101.04224 | Challenges and approaches to time-series forecasting in data center
telemetry: A Survey | Time-series forecasting has been an important research domain for many years. Its applications include ECG prediction, sales forecasting, weather conditions, and even COVID-19 spread prediction. These applications have motivated many researchers to search for an optimal forecasting approach, but the modeling approach also changes as the application domain changes. This work focuses on reviewing different forecasting approaches for telemetry data predictions collected at data centers. Forecasting of telemetry data is a critical feature of network and data center management products. However, the options for forecasting approaches range from simple linear statistical models to high-capacity deep learning architectures. In this paper, we attempt to summarize and evaluate the performance of well-known time series forecasting techniques. We hope that this evaluation provides a comprehensive summary that supports innovation in forecasting approaches for telemetry data. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 215,090
2310.00436 | Enhancing Representation Generalization in Authorship Identification | Authorship identification ascertains the authorship of texts whose origins remain undisclosed. That authorship identification techniques work as reliably as they do has been attributed to the fact that authorial style is properly captured and represented. Although modern authorship identification methods have evolved significantly over the years and have proven effective in distinguishing authorial styles, the generalization of stylistic features across domains has not been systematically reviewed. The presented work addresses the challenge of enhancing the generalization of stylistic representations in authorship identification, particularly when there are discrepancies between training and testing samples. A comprehensive review of empirical studies was conducted, focusing on various stylistic features and their effectiveness in representing an author's style. The influencing factors such as topic, genre, and register on writing style were also explored, along with strategies to mitigate their impact. While some stylistic features, like character n-grams and function words, have proven to be robust and discriminative, others, such as content words, can introduce biases and hinder cross-domain generalization. Representations learned using deep learning models, especially those incorporating character n-grams and syntactic information, show promise in enhancing representation generalization. The findings underscore the importance of selecting appropriate stylistic features for authorship identification, especially in cross-domain scenarios. The recognition of the strengths and weaknesses of various linguistic features paves the way for more accurate authorship identification in diverse contexts. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 395,982 |
1603.05614 | Streaming Algorithms for News and Scientific Literature Recommendation:
Submodular Maximization with a d-Knapsack Constraint | Submodular maximization problems belong to the family of combinatorial optimization problems and enjoy wide applications. In this paper, we focus on the problem of maximizing a monotone submodular function subject to a $d$-knapsack constraint, for which we propose a streaming algorithm that achieves a $\left(\frac{1}{1+2d}-\epsilon\right)$-approximation of the optimal value, while it only needs one single pass through the dataset without storing all the data in the memory. In our experiments, we extensively evaluate the effectiveness of our proposed algorithm via two applications: news recommendation and scientific literature recommendation. It is observed that the proposed streaming algorithm achieves both execution speedup and memory saving by several orders of magnitude, compared with existing approaches. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 53,380 |
2403.12422 | Jetfire: Efficient and Accurate Transformer Pretraining with INT8 Data
Flow and Per-Block Quantization | Pretraining transformers is generally time-consuming. Fully quantized training (FQT) is a promising approach to speed up pretraining. However, most FQT methods adopt a quantize-compute-dequantize procedure, which often leads to suboptimal speedup and significant performance degradation when used in transformers due to the high memory access overheads and low-precision computations. In this work, we propose Jetfire, an efficient and accurate INT8 training method specific to transformers. Our method features an INT8 data flow to optimize memory access and a per-block quantization method to maintain the accuracy of pretrained transformers. Extensive experiments demonstrate that our INT8 FQT method achieves comparable accuracy to the FP16 training baseline and outperforms the existing INT8 training works for transformers. Moreover, for a standard transformer block, our method offers an end-to-end training speedup of 1.42x and a 1.49x memory reduction compared to the FP16 baseline. Our code is open sourced at https://github.com/thu-ml/Jetfire-INT8Training. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 439,174
2201.05337 | A Survey of Controllable Text Generation using Transformer-based
Pre-trained Language Models | Controllable Text Generation (CTG) is an emerging area in the field of natural language generation (NLG). It is regarded as crucial for the development of advanced text generation technologies that better meet the specific constraints in practical applications. In recent years, methods using large-scale pre-trained language models (PLMs), in particular the widely used transformer-based PLMs, have become a new paradigm of NLG, allowing generation of more diverse and fluent text. However, due to the limited level of interpretability of deep neural networks, the controllability of these methods needs to be guaranteed. To this end, controllable text generation using transformer-based PLMs has become a rapidly growing yet challenging new research hotspot. A diverse range of approaches have emerged in the past 3-4 years, targeting different CTG tasks that require different types of controlled constraints. In this paper, we present a systematic critical review of the common tasks, main approaches, and evaluation methods in this area. Finally, we discuss the challenges that the field is facing, and put forward various promising future directions. To the best of our knowledge, this is the first survey paper to summarize the state-of-the-art CTG techniques from the perspective of transformer-based PLMs. We hope it can help researchers and practitioners in the related fields to quickly track the academic and technological frontier, providing them with a landscape of the area and a roadmap for future research. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 275,364
1701.08946 | Variable selection for clustering with Gaussian mixture models: state of
the art | Mixture models have become widely used in clustering, given the probabilistic framework on which they are based. However, for modern databases, which are characterized by their large size, these models perform disappointingly when specifying the model, making the selection of relevant variables essential for this type of clustering. After recalling the basics of model-based clustering, this article examines variable selection methods for model-based clustering, as well as presenting opportunities for improving these methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 67,561
1805.01702 | Beyond the Click-Through Rate: Web Link Selection with Multi-level
Feedback | The web link selection problem is to select a small subset of web links from a large web link pool, and to place the selected links on a web page that can only accommodate a limited number of links, e.g., advertisements, recommendations, or news feeds. Beyond the much-studied click-through rate, which reflects the attractiveness of the link itself, revenue can only be obtained from user actions after clicks, e.g., purchasing after being directed to the product pages by recommendation links. Thus, the web links have an intrinsic \emph{multi-level feedback structure}. With this observation, we consider the context-free web link selection problem, where the objective is to maximize revenue while ensuring that the attractiveness is no less than a preset threshold. The key challenge of the problem is that each link's multi-level feedback is stochastic and unobservable unless the link is selected. We model this problem with a constrained stochastic multi-armed bandit formulation, and design an efficient link selection algorithm, called the Constrained Upper Confidence Bound algorithm (\textbf{Con-UCB}), and prove $O(\sqrt{T\ln T})$ bounds on both the regret and the violation of the attractiveness constraint. We conduct extensive experiments on three real-world datasets, and show that \textbf{Con-UCB} outperforms state-of-the-art context-free bandit algorithms concerning the multi-level feedback structure. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 96,696
1101.4999 | List decoding of a class of affine variety codes | Consider a polynomial $F$ in $m$ variables and a finite point ensemble $S=S_1 \times ... \times S_m$. When given the leading monomial of $F$ with respect to a lexicographic ordering we derive improved information on the possible number of zeros of $F$ of multiplicity at least $r$ from $S$. We then use this information to design a list decoding algorithm for a large class of affine variety codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 8,922 |
2404.07200 | Toward a Better Understanding of Fourier Neural Operators from a
Spectral Perspective | In solving partial differential equations (PDEs), Fourier Neural Operators (FNOs) have exhibited notable effectiveness. However, FNO is observed to be ineffective with large Fourier kernels that parameterize more frequencies. Current solutions rely on setting small kernels, restricting FNO's ability to capture complex PDE data in real-world applications. This paper offers empirical insights into FNO's difficulty with large kernels through spectral analysis: FNO exhibits a unique Fourier parameterization bias, excelling at learning dominant frequencies in target data while struggling with non-dominant frequencies. To mitigate such a bias, we propose SpecB-FNO to enhance the capture of non-dominant frequencies by adopting additional residual modules to learn from the previous ones' prediction residuals iteratively. By effectively utilizing large Fourier kernels, SpecB-FNO achieves better prediction accuracy on diverse PDE applications, with an average improvement of 50%. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 445,752 |
2208.04028 | Deep Computational Model for the Inference of Ventricular Activation
Properties | Patient-specific cardiac computational models are essential for the efficient realization of precision medicine and in-silico clinical trials using digital twins. Cardiac digital twins can provide non-invasive characterizations of cardiac functions for individual patients, and therefore are promising for patient-specific diagnosis and therapy stratification. However, current workflows for both the anatomical and functional twinning phases, referring to the inference of model anatomy and parameters from clinical data, are not sufficiently efficient, robust, and accurate. In this work, we propose a deep learning based patient-specific computational model, which can fuse both anatomical and electrophysiological information for the inference of ventricular activation properties, i.e., conduction velocities and root nodes. The activation properties can provide a quantitative assessment of cardiac electrophysiological function for the guidance of interventional procedures. We employ the Eikonal model to generate simulated electrocardiograms (ECGs) with ground truth properties to train the inference model, where specific patient information has also been considered. For evaluation, we test the model on the simulated data and obtain generally promising results with fast computation times. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 311,974
2411.01432 | Meta-Exploiting Frequency Prior for Cross-Domain Few-Shot Learning | Meta-learning offers a promising avenue for few-shot learning (FSL), enabling models to glean a generalizable feature embedding through episodic training on synthetic FSL tasks in a source domain. Yet, in practical scenarios where the target task diverges from that in the source domain, meta-learning based methods are susceptible to over-fitting. To overcome this, we introduce a novel framework, Meta-Exploiting Frequency Prior for Cross-Domain Few-Shot Learning, which is crafted to comprehensively exploit the cross-domain transferable image prior that each image can be decomposed into complementary low-frequency content details and high-frequency robust structural characteristics. Motivated by this insight, we propose to decompose each query image into its high-frequency and low-frequency components, and incorporate them in parallel into the feature embedding network to enhance the final category prediction. More importantly, we introduce a feature reconstruction prior and a prediction consistency prior to separately encourage the consistency of the intermediate features as well as the final category prediction between the original query image and its decomposed frequency components. This allows for collectively guiding the network's meta-learning process with the aim of learning generalizable image feature embeddings, while not introducing any extra computational cost in the inference phase. Our framework establishes new state-of-the-art results on multiple cross-domain few-shot learning benchmarks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 505,071
2205.10692 | All You Need Is Logs: Improving Code Completion by Learning from
Anonymous IDE Usage Logs | In this work, we propose an approach for collecting completion usage logs from the users in an IDE and using them to train a machine learning based model for ranking completion candidates. We developed a set of features that describe completion candidates and their context, and deployed their anonymized collection in the Early Access Program of IntelliJ-based IDEs. We used the logs to collect a dataset of code completions from users, and employed it to train a ranking CatBoost model. Then, we evaluated it in two settings: on a held-out set of the collected completions and in a separate A/B test on two different groups of users in the IDE. Our evaluation shows that using a simple ranking model trained on the past user behavior logs significantly improved code completion experience. Compared to the default heuristics-based ranking, our model demonstrated a decrease in the number of typing actions necessary to perform the completion in the IDE from 2.073 to 1.832. The approach adheres to privacy requirements and legal constraints, since it does not require collecting personal information, performing all the necessary anonymization on the client's side. Importantly, it can be improved continuously: implementing new features, collecting new data, and evaluating new models - this way, we have been using it in production since the end of 2020. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 297,821 |
2406.08847 | Roping in Uncertainty: Robustness and Regularization in Markov Games | We study robust Markov games (RMG) with $s$-rectangular uncertainty. We show a general equivalence between computing a robust Nash equilibrium (RNE) of a $s$-rectangular RMG and computing a Nash equilibrium (NE) of an appropriately constructed regularized MG. The equivalence result yields a planning algorithm for solving $s$-rectangular RMGs, as well as provable robustness guarantees for policies computed using regularized methods. However, we show that even for just reward-uncertain two-player zero-sum matrix games, computing an RNE is PPAD-hard. Consequently, we derive a special uncertainty structure called efficient player-decomposability and show that RNE for two-player zero-sum RMG in this class can be provably solved in polynomial time. This class includes commonly used uncertainty sets such as $L_1$ and $L_\infty$ ball uncertainty sets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 463,656 |
2408.14698 | Smart Multi-Modal Search: Contextual Sparse and Dense Embedding
Integration in Adobe Express | As user content and queries become increasingly multi-modal, the need for effective multi-modal search systems has grown. Traditional search systems often rely on textual and metadata annotations for indexed images, while multi-modal embeddings like CLIP enable direct search using text and image embeddings. However, embedding-based approaches face challenges in integrating contextual features such as user locale and recency. Building a scalable multi-modal search system requires fine-tuning several components. This paper presents a multi-modal search architecture and a series of AB tests that optimize embeddings and multi-modal technologies in Adobe Express template search. We address considerations such as embedding model selection, the roles of embeddings in matching and ranking, and the balance between dense and sparse embeddings. Our iterative approach demonstrates how utilizing sparse, dense, and contextual features enhances short and long query search, significantly reduces null rates (over 70\%), and increases click-through rates (CTR). Our findings provide insights into developing robust multi-modal search systems, thereby enhancing relevance for complex queries. | false | false | false | false | true | true | false | false | true | false | false | true | false | false | false | false | false | false | 483,633 |
2212.03411 | A Flexible Nadaraya-Watson Head Can Offer Explainable and Calibrated
Classification | In this paper, we empirically analyze a simple, non-learnable, and nonparametric Nadaraya-Watson (NW) prediction head that can be used with any neural network architecture. In the NW head, the prediction is a weighted average of labels from a support set. The weights are computed from distances between the query feature and support features. This is in contrast to the dominant approach of using a learnable classification head (e.g., a fully-connected layer) on the features, which can be challenging to interpret and can yield poorly calibrated predictions. Our empirical results on an array of computer vision tasks demonstrate that the NW head can yield better calibration with comparable accuracy compared to its parametric counterpart, particularly in data-limited settings. To further increase inference-time efficiency, we propose a simple approach that involves a clustering step run on the training set to create a relatively small distilled support set. Furthermore, we explore two means of interpretability/explainability that fall naturally from the NW head. The first is the label weights, and the second is our novel concept of the ``support influence function,'' which is an easy-to-compute metric that quantifies the influence of a support element on the prediction for a given query. As we demonstrate in our experiments, the influence function can allow the user to debug a trained model. We believe that the NW head is a flexible, interpretable, and highly useful building block that can be used in a range of applications. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 335,108 |
2304.14630 | Let the Chart Spark: Embedding Semantic Context into Chart with
Text-to-Image Generative Model | Pictorial visualization seamlessly integrates data and semantic context into visual representation, conveying complex information in a manner that is both engaging and informative. Extensive studies have been devoted to developing authoring tools to simplify the creation of pictorial visualizations. However, mainstream works mostly follow a retrieving-and-editing pipeline that heavily relies on retrieved visual elements from a dedicated corpus, which often compromises data integrity. Text-guided generation methods are emerging, but may have limited applicability due to their predefined recognized entities. In this work, we propose ChartSpark, a novel system that embeds semantic context into charts based on a text-to-image generative model. ChartSpark generates pictorial visualizations conditioned on both the semantic context conveyed in textual inputs and the data information embedded in plain charts. The method is generic for both foreground and background pictorial generation, satisfying the design practices identified from empirical research into existing pictorial visualizations. We further develop an interactive visual interface that integrates a text analyzer, editing module, and evaluation module to enable users to generate, modify, and assess pictorial visualizations. We experimentally demonstrate the usability of our tool, and conclude with a discussion of the potential of using text-to-image generative models combined with interactive interfaces for visualization design. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 361,046
2106.07582 | Non Gaussian Denoising Diffusion Models | Generative diffusion processes are an emerging and effective tool for image and speech generation. In the existing methods, the underlying noise distribution of the diffusion process is Gaussian noise. However, fitting distributions with more degrees of freedom could help the performance of such generative models. In this work, we investigate other types of noise distribution for the diffusion process. Specifically, we show that noise from a Gamma distribution provides improved results for image and speech generation. Moreover, we show that using a mixture of Gaussian noise variables in the diffusion process improves the performance over a diffusion process that is based on a single distribution. Our approach preserves the ability to efficiently sample state in the training diffusion process while using Gamma noise and a mixture of noise. | false | false | true | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 240,984
1901.05623 | Double variational principle for mean dimension | We develop a variational principle between mean dimension theory and rate distortion theory. We consider a minimax problem about the rate distortion dimension with respect to two variables (metrics and measures). We prove that the minimax value is equal to the mean dimension for a dynamical system with the marker property. The proof exhibits a new combination of ergodic theory, rate distortion theory and geometric measure theory. Along the way of the proof, we also show that if a dynamical system has the marker property then it has a metric for which the upper metric mean dimension is equal to the mean dimension. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 118,825 |
2104.02478 | Topological Regularization for Graph Neural Networks Augmentation | The complexity and non-Euclidean structure of graph data hinder the development of data augmentation methods similar to those in computer vision. In this paper, we propose a feature augmentation method for graph nodes based on topological regularization, in which topological structure information is introduced into the end-to-end model. Specifically, we first obtain topology embeddings of nodes through an unsupervised representation learning method based on random walks. Then, the topological embeddings, as additional features, and the original node features are input into a dual graph neural network for propagation, and two different high-order neighborhood representations of nodes are obtained. On this basis, we propose a regularization technique to bridge the differences between the two node representations, eliminating the adverse effects caused by directly using topological features of graphs, and greatly improving performance. We have carried out extensive experiments on a large number of datasets to prove the effectiveness of our model. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 228,732
2409.09357 | Joint Semantic Knowledge Distillation and Masked Acoustic Modeling for
Full-band Speech Restoration with Improved Intelligibility | Speech restoration aims at restoring full-band speech with high quality and intelligibility, considering a diverse set of distortions. MaskSR is a recently proposed generative model for this task. As other models of its kind, MaskSR attains high quality but, as we show, intelligibility can be substantially improved. We do so by boosting the speech encoder component of MaskSR with predictions of semantic representations of the target speech, using a pre-trained self-supervised teacher model. Then, a masked language model is conditioned on the learned semantic features to predict acoustic tokens that encode low level spectral details of the target speech. We show that, with the same MaskSR model capacity and inference time, the proposed model, MaskSR2, significantly reduces the word error rate, a typical metric for intelligibility. MaskSR2 also achieves competitive word error rate among other models, while providing superior quality. An ablation study shows the effectiveness of various semantic representations. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 488,287 |
2210.01797 | Ten Years after ImageNet: A 360{\deg} Perspective on AI | It is ten years since neural networks made their spectacular comeback. Prompted by this anniversary, we take a holistic perspective on Artificial Intelligence (AI). Supervised Learning for cognitive tasks is effectively solved - provided we have enough high-quality labeled data. However, deep neural network models are not easily interpretable, and thus the debate between blackbox and whitebox modeling has come to the fore. The rise of attention networks, self-supervised learning, generative modeling, and graph neural networks has widened the application space of AI. Deep Learning has also propelled the return of reinforcement learning as a core building block of autonomous decision making systems. The possible harms enabled by new AI technologies have raised socio-technical issues such as transparency, fairness, and accountability. The dominance of AI by Big Tech, who control talent, computing resources, and, most importantly, data, may lead to an extreme AI divide. Failure to meet high expectations in high-profile, much-heralded flagship projects like self-driving vehicles could trigger another AI winter. | false | false | false | false | true | true | true | false | false | false | false | false | false | false | false | false | false | false | 321,405
1206.6380 | Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring | In this paper we address the following question: Can we approximately sample from a Bayesian posterior distribution if we are only allowed to touch a small mini-batch of data-items for every sample we generate?. An algorithm based on the Langevin equation with stochastic gradients (SGLD) was previously proposed to solve this, but its mixing rate was slow. By leveraging the Bayesian Central Limit Theorem, we extend the SGLD algorithm so that at high mixing rates it will sample from a normal approximation of the posterior, while for slow mixing rates it will mimic the behavior of SGLD with a pre-conditioner matrix. As a bonus, the proposed algorithm is reminiscent of Fisher scoring (with stochastic gradients) and as such an efficient optimizer during burn-in. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 16,915 |
2406.17810 | PIC2O-Sim: A Physics-Inspired Causality-Aware Dynamic Convolutional
Neural Operator for Ultra-Fast Photonic Device FDTD Simulation | The finite-difference time-domain (FDTD) method, which is important in the photonic hardware design flow, is widely adopted to solve time-domain Maxwell equations. However, FDTD is known for its prohibitive runtime cost, taking minutes to hours to simulate a single device. Recently, AI has been applied to realize orders-of-magnitude speedup in partial differential equation (PDE) solving. However, AI-based FDTD solvers for photonic devices have not been clearly formulated. Directly applying off-the-shelf models to predict the optical field dynamics shows unsatisfying fidelity and efficiency, since the model primitives are agnostic to the unique physical properties of Maxwell equations and lack algorithmic customization. In this work, we thoroughly investigate the synergy between neural operator designs and the physical properties of Maxwell equations and introduce a physics-inspired AI-based FDTD prediction framework, PIC2O-Sim, which features a causality-aware dynamic convolutional neural operator as its backbone model that honors the space-time causality constraints via careful receptive field configuration and explicitly captures the permittivity-dependent light propagation behavior via an efficient dynamic convolution operator. Meanwhile, we explore the trade-offs among prediction scalability, fidelity, and efficiency via a multi-stage partitioned time-bundling technique in autoregressive prediction. Multiple key techniques have been introduced to mitigate iterative error accumulation while maintaining efficiency advantages during autoregressive field prediction. Extensive evaluations on three challenging photonic device simulation tasks have shown the superiority of our PIC2O-Sim method, showing 51.2% lower roll-out prediction error and 23.5 times fewer parameters than state-of-the-art neural operators, while providing 300-600x higher simulation speed than an open-source FDTD numerical solver. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 467,741
1307.5736 | Speaker Independent Continuous Speech to Text Converter for Mobile
Application | An efficient speech-to-text converter for mobile applications is presented in this work. The prime motive is to formulate a system that gives optimum performance in terms of complexity, accuracy, delay, and memory requirements for the mobile environment. The speech-to-text converter consists of two stages, namely front-end analysis and pattern recognition. The front-end analysis involves preprocessing and feature extraction. Traditional voice activity detection (VAD) algorithms, which track only energy, cannot successfully identify potential speech in the input because the unwanted part of the speech also has some energy and appears to be speech. In the proposed system, a VAD that calculates the energy of the high-frequency part separately, along with the zero-crossing rate, is used to differentiate noise from speech. Mel Frequency Cepstral Coefficients (MFCC) are used for feature extraction, and a Generalized Regression Neural Network is used as the recognizer. MFCC provides a low word error rate and better feature extraction, while the neural network improves accuracy. Thus a small database containing all possible syllable pronunciations of the user is sufficient to give recognition accuracy close to 100%, and the proposed technique enables the realization of real-time speaker-independent applications such as mobile phones, PDAs, etc. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | 25,979
1011.0520 | Adaptive Algorithms for Coverage Control and Space Partitioning in
Mobile Robotic Networks | This paper considers deployment problems where a mobile robotic network must optimize its configuration in a distributed way in order to minimize a steady-state cost function that depends on the spatial distribution of certain probabilistic events of interest. Moreover, it is assumed that the event location distribution is a priori unknown, and can only be progressively inferred from the observation of the actual event occurrences. Three classes of problems are discussed in detail: coverage control problems, spatial partitioning problems, and dynamic vehicle routing problems. In each case, distributed stochastic gradient algorithms optimizing the performance objective are presented. The stochastic gradient view simplifies and generalizes previously proposed solutions, and is applicable to new complex scenarios, such as adaptive coverage involving heterogeneous agents. Remarkably, these algorithms often take the form of simple distributed rules that could be implemented on resource-limited platforms. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 8,117 |
2112.06204 | Few-Shot Out-of-Domain Transfer Learning of Natural Language
Explanations in a Label-Abundant Setup | Training a model to provide natural language explanations (NLEs) for its predictions usually requires the acquisition of task-specific NLEs, which is time- and resource-consuming. A potential solution is the few-shot out-of-domain transfer of NLEs from a parent task with many NLEs to a child task. In this work, we examine the setup in which the child task has few NLEs but abundant labels. We establish four few-shot transfer learning methods that cover the possible fine-tuning combinations of the labels and NLEs for the parent and child tasks. We transfer explainability from a large natural language inference dataset (e-SNLI) separately to two child tasks: (1) hard cases of pronoun resolution, where we introduce the small-e-WinoGrande dataset of NLEs on top of the WinoGrande dataset, and (2)~commonsense validation (ComVE). Our results demonstrate that the parent task helps with NLE generation and we establish the best methods for this setup. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 271,095 |
1506.05254 | Gradient Estimation Using Stochastic Computation Graphs | In a variety of problems originating in supervised, unsupervised, and reinforcement learning, the loss function is defined by an expectation over a collection of random variables, which might be part of a probabilistic model or the external world. Estimating the gradient of this loss function, using samples, lies at the core of gradient-based learning algorithms for these problems. We introduce the formalism of stochastic computation graphs---directed acyclic graphs that include both deterministic functions and conditional probability distributions---and describe how to easily and automatically derive an unbiased estimator of the loss function's gradient. The resulting algorithm for computing the gradient estimator is a simple modification of the standard backpropagation algorithm. The generic scheme we propose unifies estimators derived in a variety of prior work, along with variance-reduction techniques therein. It could assist researchers in developing intricate models involving a combination of stochastic and deterministic operations, enabling, for example, attention, memory, and control actions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 44,280
2410.04147 | Can the Variation of Model Weights be used as a Criterion for Self-Paced
Multilingual NMT? | Many-to-one neural machine translation systems improve over one-to-one systems when training data is scarce. In this paper, we design and test a novel algorithm for selecting the language of minibatches when training such systems. The algorithm changes the language of the minibatch when the weights of the model do not evolve significantly, as measured by the smoothed KL divergence between all layers of the Transformer network. This algorithm outperforms the use of alternating monolingual batches, but not the use of shuffled batches, in terms of translation quality (measured with BLEU and COMET) and convergence speed. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 495,145 |
2109.07783 | Towards Non-Line-of-Sight Photography | Non-line-of-sight (NLOS) imaging is based on capturing the multi-bounce indirect reflections from the hidden objects. Active NLOS imaging systems rely on the capture of the time of flight of light through the scene, and have shown great promise for the accurate and robust reconstruction of hidden scenes without the need for specialized scene setups and prior assumptions. Although existing methods can reconstruct 3D geometries of the hidden scene with excellent depth resolution, accurately recovering object textures and appearance with high lateral resolution remains a challenging problem. In this work, we propose a new problem formulation, called NLOS photography, to specifically address this deficiency. Rather than performing an intermediate estimate of the 3D scene geometry, our method follows a data-driven approach and directly reconstructs 2D images of a NLOS scene that closely resemble the pictures taken with a conventional camera from the location of the relay wall. This formulation largely simplifies the challenging reconstruction problem by bypassing the explicit modeling of 3D geometry, and enables the learning of a deep model with a relatively small training dataset. The results are NLOS reconstructions of unprecedented lateral resolution and image quality. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 255,654
2212.10937 | DCC: A Cascade based Approach to Detect Communities in Social Networks | Community detection in Social Networks is associated with finding and grouping the most similar nodes inherent in the network. These similar nodes are identified by computing tie strength. Stronger ties indicate higher proximity shared by connected node pairs. This work is motivated by Granovetter's argument that strong ties lie within densely connected nodes and the theory that community cores in real-world networks are densely connected. In this paper, we have introduced a novel method called \emph{Disjoint Community detection using Cascades (DCC)} which demonstrates the effectiveness of a new local density based tie strength measure on detecting communities. Here, tie strength is utilized to decide the paths followed for propagating information. The idea is to crawl through the tuple information of cascades towards the community core guided by increasing tie strength. Considering the cascade generation step, a novel preferential membership method has been developed to assign community labels to unassigned nodes. The efficacy of $DCC$ has been analyzed based on quality and accuracy on several real-world datasets and baseline community detection algorithms. | false | false | false | true | false | true | true | false | false | false | false | true | false | false | false | false | false | false | 337,660 |
2003.13848 | Code Prediction by Feeding Trees to Transformers | We advance the state-of-the-art in the accuracy of code prediction (next token prediction) used in autocomplete systems. First, we report that using the recently proposed Transformer architecture even out-of-the-box outperforms previous neural and non-neural systems for code prediction. We then show that by making the Transformer architecture aware of the syntactic structure of code, we further increase the margin by which a Transformer-based system outperforms previous systems. With this, it outperforms the accuracy of an RNN-based system (similar to Hellendoorn et al. 2018) by 18.3%, the Deep3 system (Raychev et al. 2016) by 14.1%, and an adaptation of Code2Seq (Alon et al., 2018) for code prediction by 14.4%. We present in the paper several ways of communicating the code structure to the Transformer, which is fundamentally built for processing sequence data. We provide a comprehensive experimental evaluation of our proposal, along with alternative design choices, on a standard Python dataset, as well as on a Facebook internal Python corpus. Our code and data preparation pipeline will be available in open source. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 170,325 |
2102.07539 | Crowdsourcing Parallel Corpus for English-Oromo Neural Machine
Translation using Community Engagement Platform | Even though Afaan Oromo is the most widely spoken language in the Cushitic family, used by more than fifty million people in the Horn and East Africa, it is surprisingly resource-scarce from a technological point of view. The increasing number of useful documents written in English motivates building machine translation systems that can translate those documents and make them easily accessible in the local language. The paper deals with implementing translation from English to Afaan Oromo and vice versa using Neural Machine Translation. However, this direction is not very well explored, owing to the limited size and diversity of available corpora. Nevertheless, using a bilingual corpus of just over 40k sentence pairs that we have collected, this study shows a promising result. About a quarter of this corpus was collected via a Community Engagement Platform (CEP) implemented to enrich the parallel corpus through crowdsourced translations. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 220,139 |
1901.03857 | Deep-learning-based identification of odontogenic keratocysts in
hematoxylin- and eosin-stained jaw cyst specimens | The aim of this study was to develop a digital histopathology system for identifying odontogenic keratocysts in hematoxylin- and eosin-stained tissue specimens of jaw cysts. Approximately 5000 microscopy images with 400$\times$ magnification were obtained from 199 odontogenic keratocysts, 208 dentigerous cysts, and 55 radicular cysts. A proportion of these images were used to make training patches, which were annotated as belonging to one of the following three classes: keratocysts, non-keratocysts, and stroma. The patches for the cysts contained the complete lining epithelium, with the cyst cavity being present on the upper side. The convolutional neural network (CNN) VGG16 was finetuned to this dataset. The trained CNN could recognize the basal cell palisading pattern, which is the definitive criterion for diagnosing keratocysts. Some of the remaining images were scanned and analyzed by the trained CNN, whose output was then used to train another CNN for binary classification (keratocyst or not). The area under the receiver operating characteristics curve for the entire algorithm was 0.997 for the test dataset. Thus, the proposed patch classification strategy is usable for automated keratocyst diagnosis. However, further optimization must be performed to make it suitable for practical use. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 118,502 |
2005.06313 | Stealth Communication with Vanishing Power over Binary Symmetric
Channels | A framework for stealth communication with vanishing power (VP) is presented by studying binary symmetric channels. Coding theorems are proved by modifying Gallager's error exponents for VP and by applying resolvability exponents. The analysis unifies and generalizes existing rate bounds for covert and stealth communication. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 176,974 |
1302.3860 | ScalienDB: Designing and Implementing a Distributed Database using Paxos | ScalienDB is a scalable, replicated database built on top of the Paxos algorithm. It was developed from 2010 to 2012, when the startup backing it failed. This paper discusses the design decisions of the distributed database, describes interesting parts of the C++ codebase and enumerates lessons learned putting ScalienDB into production at a handful of clients. The source code is available on GitHub under the AGPL license, but it is no longer developed or maintained. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 22,098 |