Dataset schema:
- id: string, length 9 to 16
- title: string, length 4 to 278
- abstract: string, length 3 to 4.08k
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each
- __index_level_0__: int64, 0 to 541k
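Each record stores its category labels as 18 true/false flags in the schema's column order. A minimal sketch (plain Python; the function name `decode_labels` is our own, not part of the dataset) of decoding one row's flags into the active category names:

```python
# Boolean label columns, in the order listed in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def decode_labels(flags):
    """Map a record's 18 true/false flags to the names of the active categories."""
    if len(flags) != len(LABEL_COLUMNS):
        raise ValueError("expected one flag per label column")
    return [name for name, flag in zip(LABEL_COLUMNS, flags) if flag]

# Flags of the first record (2306.11816): true at cs.AI, cs.LG, cs.CL.
flags = [False, False, False, False, True, False, True, False, True,
         False, False, False, False, False, False, False, False, False]
print(decode_labels(flags))  # -> ['cs.AI', 'cs.LG', 'cs.CL']
```

Because the flags are positional, any reordering of the label columns would silently change the decoded labels, so the column list must match the schema exactly.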
2306.11816
Learning to Generate Better Than Your LLM
Reinforcement learning (RL) has emerged as a powerful paradigm for fine-tuning Large Language Models (LLMs) for text generation. In particular, recent LLMs such as ChatGPT and GPT-4 can engage in fluent conversations with users after finetuning with RL. Capitalizing on key properties of text generation, we seek to investigate RL algorithms beyond general purpose algorithms like Proximal Policy Optimization (PPO). In particular, we extend RL algorithms to allow them to interact with a dynamic black-box guide LLM and propose RL with guided feedback (RLGF), a suite of RL algorithms for LLM fine-tuning. We provide two ways for the guide LLM to interact with the LLM to be optimized for maximizing rewards. The guide LLM can generate text which serves as additional starting states for the RL optimization procedure. The guide LLM can also be used to complete the partial sentences generated by the LLM that is being optimized, treating the guide LLM as an expert to imitate and surpass eventually. We experiment on the IMDB positive sentiment, CommonGen, and TL;DR summarization tasks. We show that our RL algorithms achieve higher performance than supervised learning (SL) and the RL baseline PPO, demonstrating the benefit of interaction with the guide LLM. On both CommonGen and TL;DR, we not only outperform our SL baselines but also improve upon PPO across a variety of metrics beyond the one we optimized for. Our code can be found at https://github.com/Cornell-RL/tril.
labels: cs.AI, cs.LG, cs.CL
__index_level_0__: 374,718
2308.08753
BOTT: Box Only Transformer Tracker for 3D Object Tracking
Tracking 3D objects is an important task in autonomous driving. Classical Kalman Filtering based methods are still the most popular solutions. However, these methods require handcrafted designs in motion modeling and cannot benefit from the growing data amounts. In this paper, Box Only Transformer Tracker (BOTT) is proposed to learn to link 3D boxes of the same object from different frames, by taking all the 3D boxes in a time window as input. Specifically, transformer self-attention is applied to exchange information between all the boxes to learn global-informative box embeddings. The similarity between these learned embeddings can be used to link the boxes of the same object. BOTT can be used for both online and offline tracking modes seamlessly. Its simplicity enables us to significantly reduce the engineering efforts required by traditional Kalman Filtering based methods. Experiments show BOTT achieves competitive performance on the two largest 3D MOT benchmarks: 69.9 and 66.7 AMOTA on nuScenes validation and test splits, respectively, and 56.45 and 59.57 MOTA L2 on Waymo Open Dataset validation and test splits, respectively. This work suggests that tracking 3D objects by learning features directly from 3D boxes using transformers is a simple yet effective way.
labels: cs.CV
__index_level_0__: 386,018
2401.01301
Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models
Do large language models (LLMs) know the law? These models are increasingly being used to augment legal practice, education, and research, yet their revolutionary potential is threatened by the presence of hallucinations -- textual output that is not consistent with legal facts. We present the first systematic evidence of these hallucinations, documenting LLMs' varying performance across jurisdictions, courts, time periods, and cases. Our work makes four key contributions. First, we develop a typology of legal hallucinations, providing a conceptual framework for future research in this area. Second, we find that legal hallucinations are alarmingly prevalent, occurring between 58% of the time with ChatGPT 4 and 88% with Llama 2, when these models are asked specific, verifiable questions about random federal court cases. Third, we illustrate that LLMs often fail to correct a user's incorrect legal assumptions in a contra-factual question setup. Fourth, we provide evidence that LLMs cannot always predict, or do not always know, when they are producing legal hallucinations. Taken together, our findings caution against the rapid and unsupervised integration of popular LLMs into legal tasks. Even experienced lawyers must remain wary of legal hallucinations, and the risks are highest for those who stand to benefit from LLMs the most -- pro se litigants or those without access to traditional legal resources.
labels: cs.AI, cs.CL, cs.CY
__index_level_0__: 419,306
1811.01549
StNet: Local and Global Spatial-Temporal Modeling for Action Recognition
Despite the success of deep learning for static image understanding, it remains unclear which network architectures are most effective for spatial-temporal modeling in videos. In this paper, in contrast to the existing CNN+RNN or pure 3D convolution based approaches, we explore a novel spatial-temporal network (StNet) architecture for both local and global spatial-temporal modeling in videos. Particularly, StNet stacks N successive video frames into a \emph{super-image} which has 3N channels and applies 2D convolution on super-images to capture local spatial-temporal relationships. To model global spatial-temporal relationships, we apply temporal convolution on the local spatial-temporal feature maps. Specifically, a novel temporal Xception block is proposed in StNet. It employs separate channel-wise and temporal-wise convolutions over the feature sequence of a video. Extensive experiments on the Kinetics dataset demonstrate that our framework outperforms several state-of-the-art approaches in action recognition and can strike a satisfying trade-off between recognition accuracy and model complexity. We further demonstrate the generalization performance of the learned video representations on the UCF101 dataset.
labels: cs.CV
__index_level_0__: 112,399
2404.14215
Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction
The task of condensing large chunks of textual information into concise and structured tables has gained attention recently due to the emergence of Large Language Models (LLMs) and their potential benefit for downstream tasks, such as text summarization and text mining. Previous approaches often generate tables that directly replicate information from the text, limiting their applicability in broader contexts, as text-to-table generation in real-life scenarios necessitates information extraction, reasoning, and integration. However, there is a lack of both datasets and methodologies towards this task. In this paper, we introduce LiveSum, a new benchmark dataset created for generating summary tables of competitions based on real-time commentary texts. We evaluate the performances of state-of-the-art LLMs on this task in both fine-tuning and zero-shot settings, and additionally propose a novel pipeline called $T^3$(Text-Tuple-Table) to improve their performances. Extensive experimental results demonstrate that LLMs still struggle with this task even after fine-tuning, while our approach can offer substantial performance gains without explicit training. Further analyses demonstrate that our method exhibits strong generalization abilities, surpassing previous approaches on several other text-to-table datasets. Our code and data can be found at https://github.com/HKUST-KnowComp/LiveSum.
labels: cs.CL
__index_level_0__: 448,604
2501.09609
Adversarial-Ensemble Kolmogorov Arnold Networks for Enhancing Indoor Wi-Fi Positioning: A Defensive Approach Against Spoofing and Signal Manipulation Attacks
The research presents a study on enhancing the robustness of Wi-Fi-based indoor positioning systems against adversarial attacks. The goal is to improve the positioning accuracy and resilience of these systems under two attack scenarios: Wi-Fi Spoofing and Signal Strength Manipulation. Three models are developed and evaluated: a baseline model (M_Base), an adversarially trained robust model (M_Rob), and an ensemble model (M_Ens). All models utilize a Kolmogorov-Arnold Network (KAN) architecture. The robust model is trained with adversarially perturbed data, while the ensemble model combines predictions from both the base and robust models. Experimental results show that the robust model reduces positioning error by approximately 10% compared to the baseline, achieving 2.03 meters error under Wi-Fi spoofing and 2.00 meters under signal strength manipulation. The ensemble model further outperforms with errors of 2.01 meters and 1.975 meters for the respective attack types. This analysis highlights the effectiveness of adversarial training techniques in mitigating attack impacts. The findings underscore the importance of considering adversarial scenarios in developing indoor positioning systems, as improved resilience can significantly enhance the accuracy and reliability of such systems in mission-critical environments.
labels: cs.LG
__index_level_0__: 525,201
2406.13175
Sparse High Rank Adapters
Low Rank Adaptation (LoRA) has gained massive attention in recent generative AI research. One of the main advantages of LoRA is its ability to be fused with pretrained models, adding no overhead during inference. However, from a mobile deployment standpoint, we can either avoid inference overhead in the fused mode but lose the ability to switch adapters rapidly, or suffer significant (up to 30% higher) inference latency while enabling rapid switching in the unfused mode. LoRA also exhibits concept-loss when multiple adapters are used concurrently. In this paper, we propose Sparse High Rank Adapters (SHiRA), a new paradigm which incurs no inference overhead, enables rapid switching, and significantly reduces concept-loss. Specifically, SHiRA can be trained by directly tuning only 1-2% of the base model weights while leaving others unchanged. This results in a highly sparse adapter which can be switched directly in the fused mode. We further provide theoretical and empirical insights on how high sparsity in SHiRA can aid multi-adapter fusion by reducing concept loss. Our extensive experiments on LVMs and LLMs demonstrate that finetuning only a small fraction of the parameters in the base model significantly outperforms LoRA while enabling both rapid switching and multi-adapter fusion. Finally, we provide a latency- and memory-efficient SHiRA implementation based on the Parameter-Efficient Finetuning (PEFT) Library which trains at nearly the same speed as LoRA while consuming up to 16% lower peak GPU memory, thus making SHiRA easy to adopt for practical use cases. To demonstrate rapid switching benefits during inference, we show that loading SHiRA on a base model can be 5x-16x faster than LoRA fusion on a CPU.
labels: cs.AI, cs.LG
__index_level_0__: 465,732
2307.10947
Improving Online Lane Graph Extraction by Object-Lane Clustering
Autonomous driving requires accurate local scene understanding information. To this end, autonomous agents deploy object detection and online BEV lane graph extraction methods as a part of their perception stack. In this work, we propose an architecture and loss formulation to improve the accuracy of local lane graph estimates by using 3D object detection outputs. The proposed method learns to assign the objects to centerlines by considering the centerlines as cluster centers and the objects as data points to be assigned a probability distribution over the cluster centers. This training scheme ensures direct supervision on the relationship between lanes and objects, thus leading to better performance. The proposed method improves lane graph estimation substantially over state-of-the-art methods. The extensive ablations show that our method can achieve significant performance improvements by using the outputs of existing 3D object detection methods. Since our method uses the detection outputs rather than detection method intermediate representations, a single model of our method can use any detection method at test time.
labels: cs.CV
__index_level_0__: 380,749
1703.06708
Complex Number Formulation and Convex Relaxations for Aircraft Conflict Resolution
We present a novel complex number formulation along with tight convex relaxations for the aircraft conflict resolution problem. Our approach combines both speed and heading control and provides global optimality guarantees despite non-convexities in the feasible region. As a side result, we present a new characterization of the conflict separation condition in the form of disjunctive linear constraints. Our formulation features one binary variable per pair of aircraft, is free of trigonometric functions, and captures the non-convexity in a set of quadratic concave constraints. Using our approach, we are able to close a number of open instances and reduce computational time by up to two orders of magnitude on standard instances.
labels: cs.CE
__index_level_0__: 70,270
2101.04350
Automated Detection of Patellofemoral Osteoarthritis from Knee Lateral View Radiographs Using Deep Learning: Data from the Multicenter Osteoarthritis Study (MOST)
Objective: To assess the ability of imaging-based deep learning to predict radiographic patellofemoral osteoarthritis (PFOA) from knee lateral view radiographs. Design: Knee lateral view radiographs were extracted from The Multicenter Osteoarthritis Study (MOST) (n = 18,436 knees). The patellar region-of-interest (ROI) was first automatically detected, and subsequently, end-to-end deep convolutional neural networks (CNNs) were trained and validated to detect the status of patellofemoral OA. The patellar ROI was detected using a deep-learning-based object detection method. The manual PFOA status assessment provided in the MOST dataset was used as the classification outcome for the CNNs. Performance of the prediction models was assessed using the area under the receiver operating characteristic curve (ROC AUC) and the average precision (AP) obtained from the precision-recall (PR) curve in a stratified 5-fold cross-validation setting. Results: Of the 18,436 knees, 3,425 (19%) had PFOA. AUC and AP for the reference model including age, sex, body mass index (BMI), the total Western Ontario and McMaster Universities Arthritis Index (WOMAC) score, and tibiofemoral Kellgren-Lawrence (KL) grade to predict PFOA were 0.806 and 0.478, respectively. The CNN model that used only image data significantly improved the prediction of PFOA status (ROC AUC = 0.958, AP = 0.862). Conclusion: We present the first machine-learning-based automatic PFOA detection method. Furthermore, our deep-learning-based model trained on the patellar region from knee lateral view radiographs performs better at predicting PFOA than models based on patient characteristics and clinical assessments.
labels: cs.AI, cs.CV
__index_level_0__: 215,129
2008.00335
V2I Connectivity-Based Dynamic Queue-Jump Lane for Emergency Vehicles: A Deep Reinforcement Learning Approach
Emergency vehicle (EMV) service is a key function of cities and is exceedingly challenging due to urban traffic congestion. A main reason behind EMV service delay is the lack of communication and cooperation between vehicles blocking EMVs. In this paper, we study the improvement of EMV service under V2I connectivity. We consider the establishment of dynamic queue jump lanes (DQJLs) based on real-time coordination of connected vehicles. We develop a novel Markov decision process formulation for the DQJL problem, which explicitly accounts for the uncertainty of drivers' reaction to approaching EMVs. We propose a deep neural network-based reinforcement learning algorithm that efficiently computes the optimal coordination instructions. We also validate our approach on a micro-simulation testbed using Simulation of Urban Mobility (SUMO). Validation results show that with our proposed methodology, the centralized control system saves approximately 15\% of EMV passing time compared with the benchmark system.
labels: cs.AI, cs.LG, cs.SY
__index_level_0__: 189,982
2104.07663
Tourist route optimization in the context of Covid-19 pandemic
The paper presents an innovative method for tourist route planning inside a destination. The necessity of reorganizing the tourist routes within a destination comes as an immediate response to the Covid-19 crisis. Implementing the method inside tourist destinations can be an important advantage in making a destination safer in times of Covid-19 and post-Covid-19. The existing trend of shortening the tourist stay length accelerated as the epidemic became a pandemic. Moreover, the wariness of future pandemics has brought to the spotlight the issue of attractions inside a destination being overcrowded at certain moments. The method proposed in this paper relies on a backtracking algorithm, more precisely an adaptation of the travelling salesman problem. The method aims to facilitate navigation inside a destination and to revive certain less-visited sightseeing spots while facilitating the social distancing measures imposed by Covid-19.
labels: cs.SI
__index_level_0__: 230,505
2104.10818
XAI-N: Sensor-based Robot Navigation using Expert Policies and Decision Trees
We present a novel sensor-based learning navigation algorithm to compute a collision-free trajectory for a robot in dense and dynamic environments with moving obstacles or targets. Our approach uses a deep reinforcement learning-based expert policy that is trained using a sim2real paradigm. To increase the reliability and handle the failure cases of the expert policy, we combine it with a policy extraction technique to transform the resulting policy into a decision tree format. The resulting decision tree has properties which we use to analyze and modify the policy and improve performance on navigation metrics including smoothness, frequency of oscillation, frequency of immobilization, and obstruction of target. We are able to modify the policy to address these imperfections without retraining, combining the learning power of deep learning with the control of domain-specific algorithms. We highlight the benefits of our algorithm in simulated environments and in navigating a Clearpath Jackal robot among moving pedestrians. (Videos at this url: https://gamma.umd.edu/researchdirections/xrl/navviper)
labels: cs.AI, cs.LG, cs.RO
__index_level_0__: 231,722
1805.05518
Formal Modelling of Ontologies: An Event-B based Approach Using the Rodin Platform
This paper reports on the results of the French ANR IMPEX research project dealing with making explicit domain knowledge in design models. Ontologies are formalised as theories with sets, axioms, theorems and reasoning rules. They are integrated to design models through an annotation mechanism. Event-B has been chosen as the ground formal modelling technique for all our developments. In this paper, we particularly describe how ontologies are formalised as Event-B theories.
labels: cs.AI, Other
__index_level_0__: 97,441
2306.00262
Maximal Domain Independent Representations Improve Transfer Learning
The most effective domain adaptation (DA) involves the decomposition of data representation into a domain independent representation (DIRep) and a domain dependent representation (DDRep). A classifier is trained by using the DIRep of the labeled source images. Since the DIRep is domain invariant, the classifier can be "transferred" to make predictions for the target domain with no (or few) labels. However, information useful for classification in the target domain can "hide" in the DDRep in current DA algorithms such as Domain-Separation-Networks (DSN). DSN's weak constraint to enforce orthogonality of DIRep and DDRep allows this hiding and can result in poor performance. To address this shortcoming, we developed a new algorithm wherein a stronger constraint is imposed to minimize the DDRep by using a KL divergence loss for the DDRep, in order to create the maximal DIRep that enhances transfer learning performance. By using synthetic data sets, we show explicitly that, depending on initialization, DSN with its weaker constraint can lead to sub-optimal solutions with poorer DA performance, whereas our algorithm with maximal DIRep is robust against such perturbations. We demonstrate the equal-or-better performance of our approach against state-of-the-art algorithms by using several standard benchmark image datasets including Office. We further highlight the compatibility of our algorithm with pretrained models, extending its applicability and versatility in real-world scenarios.
labels: cs.LG, cs.CV
__index_level_0__: 369,935
1508.02405
Gait Assessment for Multiple Sclerosis Patients Using Microsoft Kinect
Gait analysis of patients with neurological disorders, including multiple sclerosis (MS), is important for rehabilitation and treatment. The Microsoft Kinect sensor, which was developed for motion recognition in gaming applications, is an ideal candidate for an inexpensive system providing the capability for human gait analysis. In this research, we develop a framework to quantify the gait abnormality of MS patients using a Kinect for Windows camera. In addition to the previously introduced gait indices, a novel set of MS gait indices based on the concept of dynamic time warping is introduced. The newly introduced indices can characterize a patient's gait pattern as a whole and quantify a subject's gait distance from the healthy population. We investigate the correlation of the gait indices with the multiple sclerosis walking scale (MSWS) and the clinical ambulation score. This work establishes the feasibility of using the Kinect sensor for clinical gait assessment of MS patients.
labels: cs.CV
__index_level_0__: 45,898
2405.20671
Position Coupling: Improving Length Generalization of Arithmetic Transformers Using Task Structure
Even for simple arithmetic tasks like integer addition, it is challenging for Transformers to generalize to longer sequences than those encountered during training. To tackle this problem, we propose position coupling, a simple yet effective method that directly embeds the structure of the tasks into the positional encoding of a (decoder-only) Transformer. Taking a departure from the vanilla absolute position mechanism assigning unique position IDs to each of the tokens, we assign the same position IDs to two or more "relevant" tokens; for integer addition tasks, we regard digits of the same significance as in the same position. On the empirical side, we show that with the proposed position coupling, our models trained on 1 to 30-digit additions can generalize up to 200-digit additions (6.67x of the trained length). On the theoretical side, we prove that a 1-layer Transformer with coupled positions can solve the addition task involving exponentially many digits, whereas any 1-layer Transformer without positional information cannot entirely solve it. We also demonstrate that position coupling can be applied to other algorithmic tasks such as Nx2 multiplication and a two-dimensional task.
labels: cs.AI, cs.LG, cs.CL
__index_level_0__: 459,462
2502.00168
Supervised Quadratic Feature Analysis: An Information Geometry Approach to Dimensionality Reduction
Supervised dimensionality reduction aims to map labeled data to a low-dimensional feature space while maximizing class discriminability. Despite the availability of methods for learning complex non-linear features (e.g. Deep Learning), there is an enduring demand for dimensionality reduction methods that learn linear features due to their interpretability, low computational cost, and broad applicability. However, there is a gap between methods that optimize linear separability (e.g. LDA), and more flexible but computationally expensive methods that optimize over arbitrary class boundaries (e.g. metric-learning methods). Here, we present Supervised Quadratic Feature Analysis (SQFA), a dimensionality reduction method for learning linear features that maximize the differences between class-conditional first- and second-order statistics, which allow for quadratic discrimination. SQFA exploits the information geometry of second-order statistics in the symmetric positive definite manifold. We show that SQFA features support quadratic discriminability in real-world problems. We also provide a theoretical link, based on information geometry, between SQFA and the Quadratic Discriminant Analysis (QDA) classifier.
labels: cs.LG
__index_level_0__: 529,236
2403.00554
Distributed MPC for autonomous ships on inland waterways with collaborative collision avoidance
This paper presents a distributed solution for the problem of collaborative collision avoidance for autonomous inland waterway ships. A two-layer collision avoidance framework that considers inland waterway traffic regulations is proposed to increase navigational safety for autonomous ships. Our approach allows for modifying traffic rules without changing the collision avoidance algorithm, and is based on a novel formulation of model predictive control (MPC) for collision avoidance of ships. This MPC formulation is designed for inland waterway traffic and can handle complex scenarios. The alternating direction method of multipliers is used as a scheme for exchanging and negotiating intentions among ships. Simulation results show that the proposed algorithm can comply with traffic rules. Furthermore, the proposed algorithm can safely deviate from traffic rules when necessary to increase efficiency in complex scenarios.
labels: cs.SY
__index_level_0__: 434,021
2111.06537
Multi-Step Budgeted Bayesian Optimization with Unknown Evaluation Costs
Bayesian optimization (BO) is a sample-efficient approach to optimizing costly-to-evaluate black-box functions. Most BO methods ignore how evaluation costs may vary over the optimization domain. However, these costs can be highly heterogeneous and are often unknown in advance. This occurs in many practical settings, such as hyperparameter tuning of machine learning algorithms or physics-based simulation optimization. Moreover, those few existing methods that acknowledge cost heterogeneity do not naturally accommodate a budget constraint on the total evaluation cost. This combination of unknown costs and a budget constraint introduces a new dimension to the exploration-exploitation trade-off, where learning about the cost incurs the cost itself. Existing methods do not reason about the various trade-offs of this problem in a principled way, leading often to poor performance. We formalize this claim by proving that the expected improvement and the expected improvement per unit of cost, arguably the two most widely used acquisition functions in practice, can be arbitrarily inferior with respect to the optimal non-myopic policy. To overcome the shortcomings of existing approaches, we propose the budgeted multi-step expected improvement, a non-myopic acquisition function that generalizes classical expected improvement to the setting of heterogeneous and unknown evaluation costs. Finally, we show that our acquisition function outperforms existing methods in a variety of synthetic and real problems.
labels: cs.LG
__index_level_0__: 266,103
2007.05163
Handling Collocations in Hierarchical Latent Tree Analysis for Topic Modeling
Topic modeling has been one of the most active research areas in machine learning in recent years. Hierarchical latent tree analysis (HLTA) has been recently proposed for hierarchical topic modeling and has shown superior performance over state-of-the-art methods. However, the models used in HLTA have a tree structure and cannot represent the different meanings of multiword expressions sharing the same word appropriately. Therefore, we propose a method for extracting and selecting collocations as a preprocessing step for HLTA. The selected collocations are replaced with single tokens in the bag-of-words model before running HLTA. Our empirical evaluation shows that the proposed method led to better performance of HLTA on three of the four data sets tested.
labels: cs.IR, cs.LG, cs.CL
__index_level_0__: 186,585
2108.10378
Lightweight Multi-person Total Motion Capture Using Sparse Multi-view Cameras
Multi-person total motion capture is extremely challenging when it comes to handling severe occlusions, different reconstruction granularities from body to face and hands, drastically changing observation scales and fast body movements. To overcome these challenges, we contribute a lightweight total motion capture system for multi-person interactive scenarios using only sparse multi-view cameras. By contributing a novel hand and face bootstrapping algorithm, our method is capable of efficient localization and accurate association of the hands and faces even on severely occluded occasions. We leverage both pose regression and keypoint detection methods and further propose a unified two-stage parametric fitting method for achieving pixel-aligned accuracy. Moreover, for extremely self-occluded poses and close interactions, a novel feedback mechanism is proposed to propagate the pixel-aligned reconstructions into the next frame for more accurate association. Overall, we propose the first lightweight total capture system and achieve fast, robust and accurate multi-person total motion capture performance. The results and experiments show that our method achieves more accurate results than existing methods under sparse-view setups.
labels: cs.CV
__index_level_0__: 251,876
2201.02718
Multi-Vehicle Control in Roundabouts using Decentralized Game-Theoretic Planning
Safe navigation in dense, urban driving environments remains an open problem and an active area of research. Unlike typical predict-then-plan approaches, game-theoretic planning considers how one vehicle's plan will affect the actions of another. Recent work has demonstrated significant improvements in the time required to find local Nash equilibria in general-sum games with nonlinear objectives and constraints. When applied trivially to driving, these works assume all vehicles in a scene play a game together, which can result in intractable computation times for dense traffic. We formulate a decentralized approach to game-theoretic planning by assuming that agents only play games within their observational vicinity, which we believe to be a more reasonable assumption for human driving. Games are played in parallel for all strongly connected components of an interaction graph, significantly reducing the number of players and constraints in each game, and therefore the time required for planning. We demonstrate that our approach can achieve collision-free, efficient driving in urban environments by comparing performance against an adaptation of the Intelligent Driver Model and centralized game-theoretic planning when navigating roundabouts in the INTERACTION dataset. Our implementation is available at http://github.com/sisl/DecNashPlanning.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
274,623
1510.00083
Optimizing Energy Storage Participation in Emerging Power Markets
The growing amount of intermittent renewables in power generation creates challenges for real-time matching of supply and demand in the power grid. Emerging ancillary power markets provide new incentives to consumers (e.g., electrical vehicles, data centers, and others) to perform demand response to help stabilize the electricity grid. A promising class of potential demand response providers includes energy storage systems (ESSs). This paper evaluates the benefits of using various types of novel ESS technologies for a variety of emerging smart grid demand response programs, such as regulation services reserves (RSRs), contingency reserves, and peak shaving. We model, formulate and solve optimization problems to maximize the net profit of ESSs in providing each demand response. Our solution selects the optimal power and energy capacities of the ESS, determines the optimal reserve value to provide as well as the ESS real-time operational policy for program participation. Our results highlight that applying ultra-capacitors and flywheels in RSR has the potential to be up to 30 times more profitable than using common battery technologies such as LI and LA batteries for peak shaving.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
47,485
1705.03366
Frequency Switching for Simultaneous Wireless Information and Power Transfer
A new frequency switching receiver structure is proposed for simultaneous wireless information and power transfer in multi-carrier communication systems. Each subcarrier is switched to either the energy harvesting unit or the information decoding unit, according to the optimal subcarrier allocation. To implement the system, one-bit feedback is required for each subcarrier. Two optimization problems are defined, converted to binary knapsack problems, and solved using dynamic programming approaches. Upper bounds are obtained using continuous relaxations. Power allocation is integrated to further increase the performance. Numerical studies show that the proposed frequency switching based model is better than existing models in a wide range of parameters.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
73,170
1810.06729
Robust Neural Machine Translation with Joint Textual and Phonetic Embedding
Neural machine translation (NMT) is notoriously sensitive to noises, but noises are almost inevitable in practice. One special kind of noise is the homophone noise, where words are replaced by other words with similar pronunciations. We propose to improve the robustness of NMT to homophone noises by 1) jointly embedding both textual and phonetic information of source sentences, and 2) augmenting the training dataset with homophone noises. Interestingly, to achieve better translation quality and more robustness, we found that most (though not all) weights should be put on the phonetic rather than textual information. Experiments show that our method not only significantly improves the robustness of NMT to homophone noises, but also surprisingly improves the translation quality on some clean test sets.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
110,487
2006.03206
Achieving High Throughput and Elasticity in a Larger-than-Memory Store
Millions of sensors, mobile applications and machines now generate billions of events. Specialized many-core key-value stores (KVSs) can ingest and index these events at high rates (over 100 Mops/s on one machine) if events are generated on the same machine; however, to be practical and cost-effective they must ingest events over the network and scale across cloud resources elastically. We present Shadowfax, a new distributed KVS based on FASTER, that transparently spans DRAM, SSDs, and cloud blob storage while serving 130 Mops/s/VM over commodity Azure VMs using conventional Linux TCP. Beyond high single-VM performance, Shadowfax uses a unique approach to distributed reconfiguration that avoids any server-side key ownership checks or cross-core coordination both during normal operation and migration. Hence, Shadowfax can shift load in 17 s to improve system throughput by 10 Mops/s with little disruption. Compared to the state-of-the-art, it has 8x better throughput (than Seastar+memcached) and avoids costly I/O to move cold data during migration. On 12 machines, Shadowfax retains its high throughput to perform 930 Mops/s, which, to the best of our knowledge, is the highest reported throughput for a distributed KVS used for large-scale data ingestion and indexing.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
180,245
1401.0734
Repairable Fountain Codes
We introduce a new family of Fountain codes that are systematic and also have sparse parities. Given an input of $k$ symbols, our codes produce an unbounded number of output symbols, generating each parity independently by linearly combining a logarithmic number of randomly selected input symbols. The construction guarantees that for any $\epsilon>0$ accessing a random subset of $(1+\epsilon)k$ encoded symbols, asymptotically suffices to recover the $k$ input symbols with high probability. Our codes have the additional benefit of logarithmic locality: a single lost symbol can be repaired by accessing a subset of $O(\log k)$ of the remaining encoded symbols. This is a desired property for distributed storage systems where symbols are spread over a network of storage nodes. Beyond recovery upon loss, local reconstruction provides an efficient alternative for reading symbols that cannot be accessed directly. In our code, a logarithmic number of disjoint local groups is associated with each systematic symbol, allowing multiple parallel reads. Our main mathematical contribution involves analyzing the rank of sparse random matrices with specific structure over finite fields. We rely on establishing that a new family of sparse random bipartite graphs have perfect matchings with high probability.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
29,583
2407.11089
Explainable bank failure prediction models: Counterfactual explanations to reduce the failure risk
The accuracy and understandability of bank failure prediction models are crucial. While interpretable models like logistic regression are favored for their explainability, complex models such as random forest, support vector machines, and deep learning offer higher predictive performance but lower explainability. These models, known as black boxes, make it difficult to derive actionable insights. To address this challenge, using counterfactual explanations is suggested. These explanations demonstrate how changes in input variables can alter the model output and suggest ways to mitigate bank failure risk. The key challenge lies in selecting the most effective method for generating useful counterfactuals, which should demonstrate validity, proximity, sparsity, and plausibility. The paper evaluates several counterfactual generation methods: WhatIf, Multi Objective, and Nearest Instance Counterfactual Explanation, and also explores resampling methods like undersampling, oversampling, SMOTE, and the cost sensitive approach to address data imbalance in bank failure prediction in the US. The results indicate that the Nearest Instance Counterfactual Explanation method yields higher quality counterfactual explanations, mainly using the cost sensitive approach. Overall, the Multi Objective Counterfactual and Nearest Instance Counterfactual Explanation methods outperform others regarding validity, proximity, and sparsity metrics, with the cost sensitive approach providing the most desirable counterfactual explanations. These findings highlight the variability in the performance of counterfactual generation methods across different balancing strategies and machine learning models, offering valuable strategies to enhance the utility of black box bank failure prediction models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
473,301
2009.09266
Humans learn too: Better Human-AI Interaction using Optimized Human Inputs
Humans rely more and more on systems with AI components. The AI community typically treats human inputs as a given and optimizes AI models only. This thinking is one-sided and it neglects the fact that humans can learn, too. In this work, human inputs are optimized for better interaction with an AI model while keeping the model fixed. The optimized inputs are accompanied by instructions on how to create them. They allow humans to save time and cut on errors, while keeping required changes to original inputs limited. We propose continuous and discrete optimization methods modifying samples in an iterative fashion. Our quantitative and qualitative evaluation including a human study on different hand-generated inputs shows that the generated proposals lead to lower error rates, require less effort to create and differ only modestly from the original samples.
true
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
196,518
1605.04951
Viziometrics: Analyzing Visual Information in the Scientific Literature
Scientific results are communicated visually in the literature through diagrams, visualizations, and photographs. These information-dense objects have been largely ignored in bibliometrics and scientometrics studies when compared to citations and text. In this paper, we use techniques from computer vision and machine learning to classify more than 8 million figures from PubMed into 5 figure types and study the resulting patterns of visual information as they relate to impact. We find that the distribution of figures and figure types in the literature has remained relatively constant over time, but can vary widely across field and topic. Remarkably, we find a significant correlation between scientific impact and the use of visual information, where higher impact papers tend to include more diagrams, and to a lesser extent more plots and photographs. To explore these results and other ways of extracting this visual information, we have built a visual browser to illustrate the concept and explore design alternatives for supporting viziometric analysis and organizing visual information. We use these results to articulate a new research agenda -- viziometrics -- to study the organization and presentation of visual information in the scientific literature.
false
false
false
true
false
true
false
false
false
false
false
true
false
false
false
false
false
true
55,932
2112.01901
The Box Size Confidence Bias Harms Your Object Detector
Countless applications depend on accurate predictions with reliable confidence estimates from modern object detectors. It is well known, however, that neural networks including object detectors produce miscalibrated confidence estimates. Recent work even suggests that detectors' confidence predictions are biased with respect to object size and position, but it is still unclear how this bias relates to the performance of the affected object detectors. We formally prove that the conditional confidence bias is harming the expected performance of object detectors and empirically validate these findings. Specifically, we demonstrate how to modify the histogram binning calibration to not only avoid performance impairment but also improve performance through conditional confidence calibration. We further find that the confidence bias is also present in detections generated on the training data of the detector, which we leverage to perform our de-biasing without using additional data. Moreover, Test Time Augmentation magnifies this bias, which results in even larger performance gains from our calibration method. Finally, we validate our findings on a diverse set of object detection architectures and show improvements of up to 0.6 mAP and 0.8 mAP50 without extra data or training.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
269,664
2305.00163
Enhancing Video Super-Resolution via Implicit Resampling-based Alignment
In video super-resolution, it is common to use a frame-wise alignment to support the propagation of information over time. The role of alignment is well-studied for low-level enhancement in video, but existing works overlook a critical step -- resampling. We show through extensive experiments that for alignment to be effective, the resampling should preserve the reference frequency spectrum while minimizing spatial distortions. However, most existing works simply use a default choice of bilinear interpolation for resampling even though bilinear interpolation has a smoothing effect and hinders super-resolution. From these observations, we propose an implicit resampling-based alignment. The sampling positions are encoded by a sinusoidal positional encoding, while the value is estimated with a coordinate network and a window-based cross-attention. We show that bilinear interpolation inherently attenuates high-frequency information while an MLP-based coordinate network can approximate more frequencies. Experiments on synthetic and real-world datasets show that alignment with our proposed implicit resampling enhances the performance of state-of-the-art frameworks with minimal impact on both compute and parameters.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
361,235
2311.06954
Multimodal Learning of Soft Robot Dynamics using Differentiable Filters
Differentiable Filters, as recursive Bayesian estimators, possess the ability to learn complex dynamics by deriving state transition and measurement models exclusively from data. This data-driven approach eliminates the reliance on explicit analytical models while maintaining the essential algorithmic components of the filtering process. However, the gain mechanism remains non-differentiable, limiting its adaptability to specific task requirements and contextual variations. To address this limitation, this paper introduces an innovative approach called α-MDF (Attention-based Multimodal Differentiable Filter). α-MDF leverages modern attention mechanisms to learn multimodal latent representations for accurate state estimation in soft robots. By incorporating attention mechanisms, α-MDF offers the flexibility to tailor the gain mechanism to the unique nature of the task and context. The effectiveness of α-MDF is validated through real-world state estimation tasks on soft robots. Our experimental results demonstrate significant reductions in state estimation errors, consistently surpassing differentiable filter baselines by up to 45% in the domain of soft robotics.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
407,138
2006.07834
Multi-Miner: Object-Adaptive Region Mining for Weakly-Supervised Semantic Segmentation
Object region mining is a critical step for weakly-supervised semantic segmentation. Most recent methods mine the object regions by expanding the seed regions localized by class activation maps. They generally do not consider the sizes of objects and apply a monotonous procedure to mining all the object regions. Thus their mined regions are often insufficient in number and scale for large objects, and on the other hand easily contaminated by surrounding backgrounds for small objects. In this paper, we propose a novel multi-miner framework to perform a region mining process that adapts to diverse object sizes and is thus able to mine more integral and finer object regions. Specifically, our multi-miner leverages a parallel modulator to check whether there are remaining object regions for each single object, and guide a category-aware generator to mine the regions of each object independently. In this way, the multi-miner adaptively takes more steps for large objects and fewer steps for small objects. Experiment results demonstrate that the multi-miner offers better region mining results and helps achieve better segmentation performance than state-of-the-art weakly-supervised semantic segmentation methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
181,967
2308.04659
A hyper-distance-based method for hypernetwork comparison
Hypernetwork is a useful way to depict multiple connections between nodes, making it an ideal tool for representing complex relationships in network science. In recent years, there has been a marked increase in studies on hypernetworks; however, comparing two hypernetworks has received less attention. This paper proposes a hyper-distance-based method (HD) for comparing hypernetworks. This method takes into account high-order information, such as the high-order distance between nodes. The experiments carried out on synthetic hypernetworks have shown that HD is capable of distinguishing between hypernetworks generated with different parameters, and it is successful in the classification of hypernetworks. Furthermore, HD outperforms current state-of-the-art baselines in distinguishing empirical hypernetworks when hyperedges are disrupted.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
384,496
1912.09025
Matrix-Calibration-Based Cascaded Channel Estimation for Reconfigurable Intelligent Surface Assisted Multiuser MIMO
Reconfigurable intelligent surface (RIS) is envisioned to be an essential component of the paradigm for beyond 5G networks as it can potentially provide similar or higher array gains with much lower hardware cost and energy consumption compared with the massive multiple-input multiple-output (MIMO) technology. In this paper, we focus on one of the fundamental challenges, namely the channel acquisition, in an RIS-assisted multiuser MIMO system. The state-of-the-art channel acquisition approach in such a system with fully passive RIS elements estimates the cascaded transmitter-to-RIS and RIS-to-receiver channels by adopting excessively long training sequences. To estimate the cascaded channels with an affordable training overhead, we formulate the channel estimation problem in the RIS-assisted multiuser MIMO system as a matrix-calibration based matrix factorization task. By exploiting the information on the slow-varying channel components and the hidden channel sparsity, we propose a novel message-passing based algorithm to factorize the cascaded channels. Furthermore, we present an analytical framework to characterize the theoretical performance bound of the proposed estimator in the large-system limit. Finally, we conduct simulations to verify the high accuracy and efficiency of the proposed algorithm.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
157,989
2202.13203
Dropout can Simulate Exponential Number of Models for Sample Selection Techniques
Following Co-teaching, two models are generally used in the literature for sample-selection-based approaches to training with noisy labels. Meanwhile, it is also well known that Dropout, when present in a network, trains an ensemble of sub-networks. We show how to leverage this property of Dropout to train an exponential number of shared models, by training a single model with Dropout. We show how we can modify existing two-model-based sample selection methodologies to use an exponential number of shared models. Not only is it more convenient to use a single model with Dropout, but this approach also combines the natural benefits of Dropout with that of training an exponential number of models, leading to improved results.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
282,526
1408.0259
Permutation Trellis Coded Multi-level FSK Signaling to Mitigate Primary User Interference in Cognitive Radio Networks
We employ Permutation Trellis Code (PTC) based multi-level Frequency Shift Keying signaling to mitigate the impact of Primary Users (PUs) on the performance of Secondary Users (SUs) in Cognitive Radio Networks (CRNs). The PUs are assumed to be dynamic in that they appear intermittently and stay active for an unknown duration. Our approach is based on the use of PTC combined with multi-level FSK modulation so that an SU can improve its data rate by increasing its transmission bandwidth while operating at low power and not creating destructive interference for PUs. We evaluate system performance by obtaining an approximation for the actual Bit Error Rate (BER) using properties of the Viterbi decoder and carry out a thorough performance analysis in terms of BER and throughput. The results show that the proposed coded system achieves i) robustness by ensuring that SUs have stable throughput in the presence of heavy PU interference and ii) improved resiliency of SU links to interference in the presence of multiple dynamic PUs.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
35,073
2311.16513
Fine-grained Appearance Transfer with Diffusion Models
Image-to-image translation (I2I), and particularly its subfield of appearance transfer, which seeks to alter the visual appearance between images while maintaining structural coherence, presents formidable challenges. Despite significant advancements brought by diffusion models, achieving fine-grained transfer remains complex, particularly in terms of retaining detailed structural elements and ensuring information fidelity. This paper proposes an innovative framework designed to surmount these challenges by integrating various aspects of semantic matching, appearance transfer, and latent deviation. A pivotal aspect of our approach is the strategic use of the predicted $x_0$ space by diffusion models within the latent space of diffusion processes. This is identified as a crucial element for the precise and natural transfer of fine-grained details. Our framework exploits this space to accomplish semantic alignment between source and target images, facilitating mask-wise appearance transfer for improved feature acquisition. A significant advancement of our method is the seamless integration of these features into the latent space, enabling more nuanced latent deviations without necessitating extensive model retraining or fine-tuning. The effectiveness of our approach is demonstrated through extensive experiments, which showcase its ability to adeptly handle fine-grained appearance transfers across a wide range of categories and domains. We provide our code at https://github.com/babahui/Fine-grained-Appearance-Transfer
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
410,936
2010.12247
An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits
In the contextual linear bandit setting, algorithms built on the optimism principle fail to exploit the structure of the problem and have been shown to be asymptotically suboptimal. In this paper, we follow recent approaches of deriving asymptotically optimal algorithms from problem-dependent regret lower bounds and we introduce a novel algorithm improving over the state-of-the-art along multiple dimensions. We build on a reformulation of the lower bound, where context distribution and exploration policy are decoupled, and we obtain an algorithm robust to unbalanced context distributions. Then, using an incremental primal-dual approach to solve the Lagrangian relaxation of the lower bound, we obtain a scalable and computationally efficient algorithm. Finally, we remove forced exploration and build on confidence intervals of the optimization problem to encourage a minimum level of exploration that is better adapted to the problem structure. We demonstrate the asymptotic optimality of our algorithm, while providing both problem-dependent and worst-case finite-time regret guarantees. Our bounds scale with the logarithm of the number of arms, thus avoiding the linear dependence common in all related prior works. Notably, we establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits. Finally, we verify that our algorithm obtains better empirical performance than state-of-the-art baselines.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
202,623
1911.08554
Classification as Decoder: Trading Flexibility for Control in Medical Dialogue
Generative seq2seq dialogue systems are trained to predict the next word in dialogues that have already occurred. They can learn from large unlabeled conversation datasets, build a deeper understanding of conversational context, and generate a wide variety of responses. This flexibility comes at the cost of control, a concerning tradeoff in doctor/patient interactions. Inaccuracies, typos, or undesirable content in the training data will be reproduced by the model at inference time. We trade a small amount of labeling effort and some loss of response variety in exchange for quality control. More specifically, a pretrained language model encodes the conversational context, and we finetune a classification head to map an encoded conversational context to a response class, where each class is a noisily labeled group of interchangeable responses. Experts can update these exemplar responses over time as best practices change without retraining the classifier or invalidating old training data. Expert evaluation of 775 unseen doctor/patient conversations shows that only 12% of the discriminative model's responses are worse than what the doctor ended up writing, compared to 18% for the generative model.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
154,217
2406.10229
Quantifying Variance in Evaluation Benchmarks
Evaluation benchmarks are the cornerstone of measuring capabilities of large language models (LLMs), as well as driving progress in said capabilities. Originally designed to make claims about capabilities (or lack thereof) in fully pretrained models, evaluation benchmarks are now also extensively used to decide between various training choices. Despite this widespread usage, we rarely quantify the variance in our evaluation benchmarks, which dictates whether differences in performance are meaningful. Here, we define and measure a range of metrics geared towards measuring variance in evaluation benchmarks, including seed variance across initialisations, and monotonicity during training. By studying a large number of models -- both openly available and pretrained from scratch -- we provide empirical estimates for a variety of variance metrics, with considerations and recommendations for practitioners. We also evaluate the utility and tradeoffs of continuous versus discrete performance measures and explore options for better understanding and reducing this variance. We find that simple changes, such as framing choice tasks (like MMLU) as completion tasks, can often reduce variance for smaller scale ($\sim$7B) models, while more involved methods inspired from human testing literature (such as item analysis and item response theory) struggle to meaningfully reduce variance. Overall, our work provides insights into variance in evaluation benchmarks, suggests LM-specific techniques to reduce variance, and more generally encourages practitioners to carefully factor in variance when comparing models.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
464,290
2501.12706
REX: Causal Discovery based on Machine Learning and Explainability techniques
Explainability techniques hold significant potential for enhancing the causal discovery process, which is crucial for understanding complex systems in areas like healthcare, economics, and artificial intelligence. However, no causal discovery methods currently incorporate explainability into their models to derive causal graphs. Thus, in this paper we explore this innovative approach, as it offers substantial potential and represents a promising new direction worth investigating. Specifically, we introduce REX, a causal discovery method that leverages machine learning (ML) models coupled with explainability techniques, specifically Shapley values, to identify and interpret significant causal relationships among variables. Comparative evaluations on synthetic datasets comprising continuous tabular data reveal that REX outperforms state-of-the-art causal discovery methods across diverse data generation processes, including non-linear and additive noise models. Moreover, REX was tested on the Sachs single-cell protein-signaling dataset, achieving a precision of 0.952 and recovering key causal relationships with no incorrect edges. Taken together, these results showcase REX's effectiveness in accurately recovering true causal structures while minimizing false positive predictions, its robustness across diverse datasets, and its applicability to real-world problems. By combining ML and explainability techniques with causal discovery, REX bridges the gap between predictive modeling and causal inference, offering an effective tool for understanding complex causal structures. REX is publicly available at https://github.com/renero/causalgraph.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
526,411
2304.00201
Precoder Design for Massive MIMO Downlink with Matrix Manifold Optimization
We investigate the weighted sum-rate (WSR) maximization linear precoder design for massive multiple-input multiple-output (MIMO) downlink. We consider a single-cell system with multiple users and propose a unified matrix manifold optimization framework applicable to total power constraint (TPC), per-user power constraint (PUPC) and per-antenna power constraint (PAPC). We prove that the precoders under TPC, PUPC and PAPC are on distinct Riemannian submanifolds, and transform the constrained problems in Euclidean space to unconstrained ones on manifolds. In accordance with this, we derive Riemannian ingredients, including orthogonal projection, Riemannian gradient, Riemannian Hessian, retraction and vector transport, which are needed for precoder design in the matrix manifold framework. Then, Riemannian design methods using Riemannian steepest descent, Riemannian conjugate gradient and Riemannian trust region are provided to design the WSR-maximization precoders under TPC, PUPC or PAPC. Riemannian methods do not involve the inverses of the large dimensional matrices during the iterations, reducing the computational complexities of the algorithms. Complexity analyses and performance simulations demonstrate the advantages of the proposed precoder design.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
355,594
2410.17439
Evaluating AI-Generated Essays with GRE Analytical Writing Assessment
The recent revolutionary advance in generative AI enables the generation of realistic and coherent texts by large language models (LLMs). Despite many existing evaluation metrics on the quality of the generated texts, there is still a lack of rigorous assessment of how well LLMs perform in complex and demanding writing assessments. This study examines essays generated by ten leading LLMs for the analytical writing assessment of the Graduate Record Exam (GRE). We assessed these essays using both human raters and the e-rater automated scoring engine as used in the GRE scoring pipeline. Notably, the top-performing Gemini and GPT-4o received an average score of 4.78 and 4.67, respectively, falling between "generally thoughtful, well-developed analysis of the issue and conveys meaning clearly" and "presents a competent analysis of the issue and conveys meaning with acceptable clarity" according to the GRE scoring guideline. We also evaluated the detection accuracy of these essays, with detectors trained on essays generated by the same and different LLMs.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
501,459
2208.14876
NestedFormer: Nested Modality-Aware Transformer for Brain Tumor Segmentation
Multi-modal MR imaging is routinely used in clinical practice to diagnose and investigate brain tumors by providing rich complementary information. Previous multi-modal MRI segmentation methods usually perform modal fusion by concatenating multi-modal MRIs at an early/middle stage of the network, which hardly explores non-linear dependencies between modalities. In this work, we propose a novel Nested Modality-Aware Transformer (NestedFormer) to explicitly explore the intra-modality and inter-modality relationships of multi-modal MRIs for brain tumor segmentation. Built on the transformer-based multi-encoder and single-decoder structure, we perform nested multi-modal fusion for high-level representations of different modalities and apply modality-sensitive gating (MSG) at lower scales for more effective skip connections. Specifically, the multi-modal fusion is conducted in our proposed Nested Modality-aware Feature Aggregation (NMaFA) module, which enhances long-term dependencies within individual modalities via a tri-orientated spatial-attention transformer, and further complements key contextual information among modalities via a cross-modality attention transformer. Extensive experiments on the BraTS2020 benchmark and a private meningiomas segmentation (MeniSeg) dataset show that NestedFormer clearly outperforms the state-of-the-art methods. The code is available at https://github.com/920232796/NestedFormer.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
315,444
2412.17916
Data-Driven Priors in the Maximum Entropy on the Mean Method for Linear Inverse Problems
We establish the theoretical framework for implementing the maximum entropy on the mean (MEM) method for linear inverse problems in the setting of approximate (data-driven) priors. We prove a.s. convergence for empirical means and further develop general estimates for the difference between the MEM solutions with different priors $\mu$ and $\nu$ based upon the epigraphical distance between their respective log-moment generating functions. These estimates allow us to establish a rate of convergence in expectation for empirical means. We illustrate our results with denoising on the MNIST and Fashion-MNIST data sets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
520,171
2009.14794
Rethinking Attention with Performers
We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness. To approximate softmax attention-kernels, Performers use a novel Fast Attention Via positive Orthogonal Random features approach (FAVOR+), which may be of independent interest for scalable kernel methods. FAVOR+ can be also used to efficiently model kernelizable attention mechanisms beyond softmax. This representational power is crucial to accurately compare softmax with other kernels for the first time on large-scale tasks, beyond the reach of regular Transformers, and investigate optimal attention-kernels. Performers are linear architectures fully compatible with regular Transformers and with strong theoretical guarantees: unbiased or nearly-unbiased estimation of the attention matrix, uniform convergence and low estimation variance. We tested Performers on a rich set of tasks stretching from pixel-prediction through text models to protein sequence modeling. We demonstrate competitive results with other examined efficient sparse and dense attention methods, showcasing effectiveness of the novel attention-learning paradigm leveraged by Performers.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
198,143
2111.12055
Generating GPU Compiler Heuristics using Reinforcement Learning
GPU compilers are complex software programs with many optimizations specific to target hardware. These optimizations are often controlled by heuristics hand-designed by compiler experts using time- and resource-intensive processes. In this paper, we developed a GPU compiler autotuning framework that uses off-policy deep reinforcement learning to generate heuristics that improve the frame rates of graphics applications. Furthermore, we demonstrate the resilience of these learned heuristics to frequent compiler updates by analyzing their stability across a year of code check-ins without retraining. We show that our machine learning-based compiler autotuning framework matches or surpasses the frame rates for 98% of graphics benchmarks with an average uplift of 1.6% up to 15.8%.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
267,852
1902.06450
Self-Attention Aligner: A Latency-Control End-to-End Model for ASR Using Self-Attention Network and Chunk-Hopping
Self-attention network, an attention-based feedforward neural network, has recently shown the potential to replace recurrent neural networks (RNNs) in a variety of NLP tasks. However, it is not clear whether the self-attention network could be a good alternative to RNNs in automatic speech recognition (ASR), which processes longer speech sequences and may have online recognition requirements. In this paper, we present an RNN-free end-to-end model: self-attention aligner (SAA), which applies self-attention networks to a simplified recurrent neural aligner (RNA) framework. We also propose a chunk-hopping mechanism, which enables the SAA model to encode segmented frame chunks one after another to support online recognition. Experiments on two Mandarin ASR datasets show that replacing RNNs with self-attention networks yields an 8.4%-10.2% relative character error rate (CER) reduction. In addition, the chunk-hopping mechanism allows the SAA to have only a 2.5% relative CER degradation with a 320ms latency. After jointly training with a self-attention network language model, our SAA model obtains further error rate reductions on multiple datasets. In particular, it achieves 24.12% CER on the Mandarin ASR benchmark (HKUST), exceeding the best end-to-end model by over 2% absolute CER.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
121,769
2401.01579
An Invariant Information Geometric Method for High-Dimensional Online Optimization
Sample efficiency is crucial in optimization, particularly in black-box scenarios characterized by expensive evaluations and zeroth-order feedback. When computing resources are plentiful, Bayesian optimization is often favored over evolution strategies. In this paper, we introduce a fully invariance-oriented evolution strategies algorithm, derived from its corresponding framework, that effectively rivals the leading Bayesian optimization method in tasks with dimensions at the upper limit of Bayesian capability. Specifically, we first build the framework InvIGO, which fully incorporates historical information while retaining full invariance and computational complexity. We then exemplify InvIGO on the multi-dimensional Gaussian, which gives an invariant and scalable optimizer, SynCMA. The theoretical behavior and advantages of our algorithm over other Gaussian-based evolution strategies are further analyzed. Finally, we benchmark SynCMA against leading algorithms in Bayesian optimization and evolution strategies on various high-dimension tasks, including Mujoco locomotion tasks, a rover planning task and synthetic functions. In all scenarios, SynCMA demonstrates great competence, if not dominance, over other algorithms in sample efficiency, showing the underdeveloped potential of property-oriented evolution strategies.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
419,418
1710.04979
Fundamental Limitations in Performance and Interpretability of Common Planar Rigid-Body Contact Models
The ability to reason about and predict the outcome of contacts is paramount to the successful execution of many robot tasks. Analytical rigid-body contact models are used extensively in planning and control due to their computational efficiency and simplicity, yet despite their prevalence, little if any empirical comparison of these models has been made and it is unclear how well they approximate contact outcomes. In this paper, we first formulate a system identification approach for six commonly used contact models in the literature, and use the proposed method to find parameters for an experimental data-set of impacts. Next, we compare the models empirically, and establish a task specific upper bound on the performance of the models and the rigid-body contact model paradigm. We highlight the limitations of these models, salient failure modes, and the care that should be taken in parameter selection, which are ultimately difficult to give a physical interpretation.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
82,560
2111.10899
Identification of Low Rank Vector Processes
We study modeling and identification of stationary processes with a spectral density matrix of low rank. Equivalently, we consider processes having an innovation of reduced dimension for which Prediction Error Methods (PEM) algorithms are not directly applicable. We show that these processes admit a special feedback structure with a deterministic feedback channel which can be used to split the identification in two steps, one of which can be based on standard algorithms while the other is based on a deterministic least squares fit. Identifiability of the feedback system is analyzed and a unique identifiable structure is characterized. Simulations show that the proposed procedure works well in some simple examples.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
267,483
2412.06461
Ranked from Within: Ranking Large Multimodal Models for Visual Question Answering Without Labels
As large multimodal models (LMMs) are increasingly deployed across diverse applications, the need for adaptable, real-world model ranking has become paramount. Traditional evaluation methods are largely dataset-centric, relying on fixed, labeled datasets and supervised metrics, which are resource-intensive and may lack generalizability to novel scenarios, highlighting the importance of unsupervised ranking. In this work, we explore unsupervised model ranking for LMMs by leveraging their uncertainty signals, such as softmax probabilities. We evaluate state-of-the-art LMMs (e.g., LLaVA) across visual question answering benchmarks, analyzing how uncertainty-based metrics can reflect model performance. Our findings show that uncertainty scores derived from softmax distributions provide a robust, consistent basis for ranking models across varied tasks. This finding enables the ranking of LMMs on real-world, unlabeled data for visual question answering, providing a practical approach for selecting models across diverse domains without requiring manual annotation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
515,253
1308.1162
Increasing Knowledge Worker Efficiency through a "Virtual Mirror" of the Social Network
In this paper we introduce a case study describing the combination of manual survey-based and e-mail-based social network analysis. The goal of the project was to increase collaboration efficiency in a team of consultants of a major high tech manufacturer. By analyzing the social network of a team of 42 consultants and comparing it with their utilization as the dependent variable, their efficiency in working together was improved in various ways: by bridging structural holes and eliminating bottlenecks, reducing stress for overburdened individuals, connecting isolated individuals and identifying the best network structures for high utilization and increased job satisfaction.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
26,282
1511.04919
Tales told by coloured tangles
Tangle machines are a topologically inspired diagrammatic formalism to describe information flow in networks. This paper begins with an expository account of tangle machines motivated by the problem of describing `covariance intersection' fusion of Gaussian estimators in networks. It then gives two examples in which tangle machines tell stories of adiabatic quantum computations, and discusses learning tangle machines from data.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
48,968
1911.07967
DLBricks: Composable Benchmark Generation to Reduce Deep Learning Benchmarking Effort on CPUs (Extended)
The past few years have seen a surge of applying Deep Learning (DL) models for a wide array of tasks such as image classification, object detection, machine translation, etc. While DL models provide an opportunity to solve otherwise intractable tasks, their adoption relies on them being optimized to meet latency and resource requirements. Benchmarking is a key step in this process but has been hampered in part due to the lack of representative and up-to-date benchmarking suites. This is exacerbated by the fast-evolving pace of DL models. This paper proposes DLBricks, a composable benchmark generation design that reduces the effort of developing, maintaining, and running DL benchmarks on CPUs. DLBricks decomposes DL models into a set of unique runnable networks and constructs the original model's performance using the performance of the generated benchmarks. DLBricks leverages two key observations: DL layers are the performance building blocks of DL models and layers are extensively repeated within and across DL models. Since benchmarks are generated automatically and the benchmarking time is minimized, DLBricks can keep up-to-date with the latest proposed models, relieving the pressure of selecting representative DL models. Moreover, DLBricks allows users to represent proprietary models within benchmark suites. We evaluate DLBricks using $50$ MXNet models spanning $5$ DL tasks on $4$ representative CPU systems. We show that DLBricks provides an accurate performance estimate for the DL models and reduces the benchmarking time across systems (e.g. within $95\%$ accuracy and up to $4.4\times$ benchmarking time speedup on Amazon EC2 c5.xlarge).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
154,033
1805.06298
SAVERS: SAR ATR with Verification Support Based on Convolutional Neural Network
We propose a new convolutional neural network (CNN) which performs coarse and fine segmentation for end-to-end synthetic aperture radar (SAR) automatic target recognition (ATR) system. In recent years, many CNNs for SAR ATR using deep learning have been proposed, but most of them classify target classes from fixed size target chips extracted from SAR imagery. On the other hand, we proposed the CNN which outputs the score of the multiple target classes and a background class for each pixel from the SAR imagery of arbitrary size and multiple targets as fine segmentation. However, it was necessary for humans to judge the CNN segmentation result. In this report, we propose a CNN called SAR ATR with verification support (SAVERS), which performs region-wise (i.e. coarse) segmentation and pixel-wise segmentation. SAVERS discriminates between target and non-target, and classifies multiple target classes and non-target class by coarse segmentation. This report describes the evaluation results of SAVERS using the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
97,577
2211.06973
A Variable Node Design with Check Node Aware Quantization Leveraging 2-Bit LDPC Decoding
For improving coarsely quantized decoding of LDPC codes, we propose a check node aware design of the variable node update. In contrast to previous works, we optimize the variable node to explicitly maximize the mutual information preserved in the check-to-variable instead of the variable-to-check node messages. The extended optimization leads to a significantly different solution for the compression operation at the variable node. Simulation results for regular LDPC codes confirm that the check node aware design, especially for very coarse quantization with 2- or 3-bit messages, achieves performance gains of up to 0.2 dB - without additional hardware costs. We also show that the 2-bit message resolution enables a very efficient implementation of the check node update, which requires only 2/9 of the 3-bit check node's transistor count and reduces the signal propagation delay by a factor of 4.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
330,085
1810.01345
NimbRo Rescue: Solving Disaster-Response Tasks through Mobile Manipulation Robot Momaro
Robots that solve complex tasks in environments too dangerous for humans to enter are desperately needed, e.g. for search and rescue applications. We describe our mobile manipulation robot Momaro, with which we participated successfully in the DARPA Robotics Challenge. It features a unique locomotion design with four legs ending in steerable wheels, which allows it both to drive omnidirectionally and to step over obstacles or climb. Furthermore, we present advanced communication and teleoperation approaches, which include immersive 3D visualization, and 6D tracking of operator head and arm motions. The proposed system is evaluated in the DARPA Robotics Challenge, the DLR SpaceBot Cup Qualification and lab experiments. We also discuss the lessons learned from the competitions.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
109,374
1504.04122
Detecting Topology Variations in Dynamical Networks
This paper considers the problem of detecting topology variations in dynamical networks. We consider a network whose behavior can be represented via a linear dynamical system. The problem of interest is then that of finding conditions under which it is possible to detect node or link disconnections from prior knowledge of the nominal network behavior and on-line measurements. The considered approach makes use of analysis tools from switching systems theory. A number of results are presented along with examples.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
42,108
2107.11007
Dynamic Proximal Unrolling Network for Compressive Imaging
Compressive imaging aims to recover a latent image from under-sampled measurements, which poses a seriously ill-posed inverse problem. Recently, deep neural networks have been applied to this problem with superior results, owing to the learned advanced image priors. These approaches, however, require training separate models for different imaging modalities and sampling ratios, leading to overfitting to specific settings. In this paper, a dynamic proximal unrolling network (dubbed DPUNet) is proposed, which can handle a variety of measurement matrices via one single model without retraining. Specifically, DPUNet can exploit both the embedded observation model via gradient descent and imposed image priors by learned dynamic proximal operators, achieving joint reconstruction. A key component of DPUNet is a dynamic proximal mapping module, whose parameters can be dynamically adjusted at the inference stage, making it adapt to different imaging settings. Experimental results demonstrate that the proposed DPUNet can effectively handle multiple compressive imaging modalities under varying sampling ratios and noise levels via only one trained model, and outperforms the state-of-the-art approaches.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
247,467
2002.11948
Features for Ground Texture Based Localization -- A Survey
Ground texture based vehicle localization using feature-based methods is a promising approach to achieve infrastructure-free high-accuracy localization. In this paper, we provide the first extensive evaluation of available feature extraction methods for this task, using separately taken image pairs as well as synthetic transformations. We identify AKAZE, SURF and CenSurE as best performing keypoint detectors, and find pairings of CenSurE with the ORB, BRIEF and LATCH feature descriptors to achieve greatest success rates for incremental localization, while SIFT stands out when considering severe synthetic transformations as they might occur during absolute localization.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
165,892
2412.08398
Grasp Diffusion Network: Learning Grasp Generators from Partial Point Clouds with Diffusion Models in SO(3)xR3
Grasping objects successfully from a single-view camera is crucial in many robot manipulation tasks. An approach to solve this problem is to leverage simulation to create large datasets of pairs of objects and grasp poses, and then learn a conditional generative model that can be prompted quickly during deployment. However, the grasp pose data is highly multimodal since there are several ways to grasp an object. Hence, in this work, we learn a grasp generative model with diffusion models to sample candidate grasp poses given a partial point cloud of an object. A novel aspect of our method is to consider diffusion in the manifold space of rotations and to propose a collision-avoidance cost guidance to improve the grasp success rate during inference. To accelerate grasp sampling we use recent techniques from the diffusion literature to achieve faster inference times. We show in simulation and real-world experiments that our approach can grasp several objects from raw depth images with $90\%$ success rate and benchmark it against several baselines.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
516,071
2006.14320
Analyzing Effect of Repeated Reading on Oral Fluency and Narrative Production for Computer-Assisted Language Learning
Repeated reading (RR) helps learners, who have little to no experience with reading fluently to gain confidence, speed and process words automatically. The benefits of repeated readings include helping all learners with fact recall, aiding identification of learners' main ideas and vocabulary, increasing comprehension, leading to faster reading as well as increasing word recognition accuracy, and assisting struggling learners as they transition from word-by-word reading to more meaningful phrasing. Thus, RR ultimately helps in improvements of learners' oral fluency and narrative production. However, there are no open audio datasets available on oral responses of learners based on their RR practices. Therefore, in this paper, we present our dataset, discuss its properties, and propose a method to assess oral fluency and narrative production for learners of English using acoustic, prosodic, lexical and syntactical characteristics. The results show that a CALL system can be developed for assessing the improvements in learners' oral fluency and narrative production.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
184,187
2501.10673
Hybrid-Quantum Neural Architecture Search for The Proximal Policy Optimization Algorithm
Recent studies in quantum machine learning advocated the use of hybrid models to work around the limitations of the currently existing Noisy Intermediate Scale Quantum (NISQ) devices. What was missing from most of them, however, was an explanation and interpretation of the choices made to pick those exact architectures, and a differentiation between good and bad hybrid architectures. This research attempts to address that gap in the literature by using the Regularized Evolution algorithm to search for the optimal hybrid classical-quantum architecture for the Proximal Policy Optimization (PPO) algorithm, a well-known reinforcement learning algorithm. Ultimately, the classical models dominated the leaderboard, with the best hybrid model coming in eleventh place among all unique models. We also try to explain the factors that contributed to these results, and why some models behave better than others, in the hope of gaining a better intuition about what should be considered good practice for designing an efficient hybrid architecture.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
525,613
2007.05801
Migratable AI: Effect of identity and information migration on users perception of conversational AI agents
Conversational AI agents are proliferating, embodying a range of devices such as smart speakers, smart displays, robots, cars, and more. We can envision a future where a personal conversational agent could migrate across different form factors and environments to always accompany and assist its user to support a far more continuous, personalized, and collaborative experience. This opens the question of what properties of a conversational AI agent migrates across forms, and how it would impact user perception. To explore this, we developed a Migratable AI system where a user's information and/or the agent's identity can be preserved as it migrates across form factors to help its user with a task. We designed a 2x2 between-subjects study to explore the effects of information migration and identity migration on user perceptions of trust, competence, likeability, and social presence. Our results suggest that identity migration had a positive effect on trust, competence, and social presence, while information migration had a positive effect on trust, competence, and likeability. Overall, users report the highest trust, competence, likeability, and social presence towards the conversational agent when both identity and information were migrated across embodiments.
true
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
186,792
2106.08727
AtrialGeneral: Domain Generalization for Left Atrial Segmentation of Multi-Center LGE MRIs
Left atrial (LA) segmentation from late gadolinium enhanced magnetic resonance imaging (LGE MRI) is a crucial step needed for planning the treatment of atrial fibrillation. However, automatic LA segmentation from LGE MRI is still challenging, due to the poor image quality, high variability in LA shapes, and unclear LA boundary. Though deep learning-based methods can provide promising LA segmentation results, they often generalize poorly to unseen domains, such as data from different scanners and/or sites. In this work, we collect 210 LGE MRIs from different centers with different levels of image quality. To evaluate the domain generalization ability of models on the LA segmentation task, we employ four commonly used semantic segmentation networks for the LA segmentation from multi-center LGE MRIs. Besides, we investigate three domain generalization strategies, i.e., histogram matching, mutual information based disentangled representation, and random style transfer, where a simple histogram matching is proved to be most effective.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
241,400
2101.05716
SICKNL: A Dataset for Dutch Natural Language Inference
We present SICK-NL (read: signal), a dataset targeting Natural Language Inference in Dutch. SICK-NL is obtained by translating the SICK dataset of Marelli et al. (2014) from English into Dutch. Having a parallel inference dataset allows us to compare both monolingual and multilingual NLP models for English and Dutch on the two tasks. In the paper, we motivate and detail the translation process, and perform a baseline evaluation on both the original SICK dataset and its Dutch incarnation SICK-NL, using Dutch skipgram embeddings and contextualised embedding models. In addition, we draw on two phenomena encountered in the translation to formulate stress tests and verify how well the Dutch models capture syntactic restructurings that do not affect semantics. Our main finding is that all models perform worse on SICK-NL than on SICK, indicating that the Dutch dataset is more challenging than the English original. Results on the stress tests show that the models do not fully capture word order freedom in Dutch, warranting future systematic studies.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
215,508
1709.04747
T${}^2$K${}^2$: The Twitter Top-K Keywords Benchmark
Information retrieval from textual data focuses on the construction of vocabularies that contain weighted term tuples. Such vocabularies can then be exploited by various text analysis algorithms to extract new knowledge, e.g., top-k keywords, top-k documents, etc. Top-k keywords are casually used for various purposes, are often computed on-the-fly, and thus must be efficiently computed. To compare competing weighting schemes and database implementations, benchmarking is customary. To the best of our knowledge, no benchmark currently addresses these problems. Hence, in this paper, we present a top-k keywords benchmark, T${}^2$K${}^2$, which features a real tweet dataset and queries with various complexities and selectivities. T${}^2$K${}^2$ helps evaluate weighting schemes and database implementations in terms of computing performance. To illustrate T${}^2$K${}^2$'s relevance and genericity, we successfully performed tests on the TF-IDF and Okapi BM25 weighting schemes, on one hand, and on different relational (Oracle, PostgreSQL) and document-oriented (MongoDB) database implementations, on the other hand.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
80,723
1506.00839
The Influence of Context on Dialogue Act Recognition
This article presents an analysis of the influence of context information on dialog act recognition. We performed experiments on the widely explored Switchboard corpus, as well as on data annotated according to the recent ISO 24617-2 standard. The latter was obtained from the Tilburg DialogBank and through the mapping of the annotations of a subset of the Let's Go corpus. We used a classification approach based on SVMs, which had proved successful in previous work and allowed us to limit the amount of context information provided. This way, we were able to observe the influence patterns as the amount of context information increased. Our base features consisted of n-grams, punctuation, and wh-words. Context information was obtained from one to five preceding segments and provided either as n-grams or dialog act classifications, with the latter typically leading to better results and more stable influence patterns. In addition to the conclusions about the importance and influence of context information, our experiments on the Switchboard corpus also led to results that advanced the state-of-the-art on the dialog act recognition task on that corpus. Furthermore, the results obtained on data annotated according to the ISO 24617-2 standard define a baseline for future work and contribute to the standardization of experiments in the area.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
43,722
1706.06696
The NAO Backpack: An Open-hardware Add-on for Fast Software Development with the NAO Robot
We present an open-source accessory for the NAO robot, which enables testing computationally demanding algorithms on an external platform while preserving the robot's autonomy and mobility. The platform has the form of a backpack, which can be 3D printed and replicated, and holds an ODROID XU4 board to process algorithms externally with ROS compatibility. We also provide a software bridge between the B-Human framework and ROS to access the robot's sensors in close to real time. We tested the platform in several robotics applications such as data logging, visual SLAM, and robot vision with deep learning techniques. The CAD model, hardware specifications and software are available online for the benefit of the community: https://github.com/uchile-robotics/nao-backpack
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
75,722
2211.06130
Physically Consistent Neural ODEs for Learning Multi-Physics Systems
Despite the immense success of neural networks in modeling system dynamics from data, they often remain physics-agnostic black boxes. In the particular case of physical systems, they might consequently make physically inconsistent predictions, which makes them unreliable in practice. In this paper, we leverage the framework of Irreversible port-Hamiltonian Systems (IPHS), which can describe most multi-physics systems, and rely on Neural Ordinary Differential Equations (NODEs) to learn their parameters from data. Since IPHS models are consistent with the first and second principles of thermodynamics by design, so are the proposed Physically Consistent NODEs (PC-NODEs). Furthermore, the NODE training procedure allows us to seamlessly incorporate prior knowledge of the system properties into the learned dynamics. We demonstrate the effectiveness of the proposed method by learning the thermodynamics of a building from real-world measurements and the dynamics of a simulated gas-piston system. Thanks to the modularity and flexibility of the IPHS framework, PC-NODEs can be extended to learn physically consistent models of multi-physics distributed systems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
329,790
2106.05786
CAT: Cross Attention in Vision Transformer
Since Transformer has found widespread use in NLP, the potential of Transformer in CV has been realized and has inspired many new approaches. However, the computation required for replacing word tokens with image patches for Transformer after the tokenization of the image is vast (e.g., ViT), which bottlenecks model training and inference. In this paper, we propose a new attention mechanism in Transformer termed Cross Attention, which alternates attention within the image patch instead of the whole image to capture local information, and applies attention between image patches, which are divided from single-channel feature maps, to capture global information. Both operations have less computation than standard self-attention in Transformer. By alternately applying attention within patches and between patches, we implement cross attention to maintain the performance with lower computational cost and build a hierarchical network called Cross Attention Transformer (CAT) for other vision tasks. Our base model achieves state-of-the-art results on ImageNet-1K, and improves the performance of other methods on COCO and ADE20K, illustrating that our network has the potential to serve as a general backbone. The code and models are available at \url{https://github.com/linhezheng19/CAT}.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
240,226
2104.10218
Episodic Memory Model for Learning Robotic Manipulation Tasks
Machine learning, artificial intelligence and especially deep learning based approaches are often used to simplify or eliminate the burden of programming industrial robots. Using these approaches, robots inherently learn a skill instead of being programmed with strict and tedious programming instructions. While deep learning is effective in making robots learn skills, it does not offer a practical route for teaching a complete task, such as assembly or machine tending, where a complex logic must be understood and related sub-tasks need to be performed. We present a model similar to an episodic memory that allows robots to comprehend sequences of actions using a single demonstration and perform them properly and accurately. The algorithm identifies and recognizes the changes in the states of the system and memorizes how to execute the necessary tasks in order to make those changes. This allows the robot to decompose the tasks into smaller sub-tasks, retain the essential steps, and remember how they have been performed.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
231,499
2307.11336
Character Time-series Matching For Robust License Plate Recognition
Automatic License Plate Recognition (ALPR) is becoming a popular study area and is applied in many fields such as transportation or smart cities. However, there are still several limitations when applying many current methods to practical problems due to the variation in real-world situations such as light changes, unclear License Plate (LP) characters, and image quality. Most recent ALPR algorithms process a single frame, which reduces accuracy when image quality is poor. This paper presents methods to improve license plate recognition accuracy by tracking the license plate in multiple frames. First, the Adaptive License Plate Rotation algorithm is applied to correctly align the detected license plate. Second, we propose a method called Character Time-series Matching to recognize license plate characters across consecutive frames. The proposed method achieves high performance on the UFPR-ALPR dataset, namely $96.7\%$ accuracy in real time on an RTX A5000 GPU card. We also deploy the algorithm for the Vietnamese ALPR system. The accuracies for license plate detection and character recognition are 0.881 and 0.979 $mAP^{test}$@.5 respectively. The source code is available at https://github.com/chequanghuy/Character-Time-series-Matching.git
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
380,870
2407.08248
Toward accessible comics for blind and low vision readers
This work explores how to fine-tune large language models using prompt engineering techniques with contextual information for generating an accurate text description of the full story, ready to be forwarded to off-the-shelf speech synthesis tools. We propose to use existing computer vision and optical character recognition techniques to build a grounded context from the comic strip image content, such as panels, characters, text, reading order and the association of bubbles and characters. Then we infer character identification and generate a comic book script with context-aware panel descriptions including the characters' appearance, posture, mood, dialogues, etc. We believe that such enriched content description can be easily used to produce audiobooks and eBooks with various voices for characters, captions and playing sound effects.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
472,086
2401.12941
Multicultural Name Recognition For Previously Unseen Names
State-of-the-art Named Entity Recognition (NER) models have achieved an impressive ability to extract common phrases from text that belong to labels such as location, organization, time, and person. However, typical NER systems that rely on having seen a specific entity in their training data in order to label an entity perform poorly on rare or unseen entities (Derczynski et al., 2017). This paper attempts to improve recognition of person names, a diverse category that can grow any time someone is born or changes their name. In order for downstream tasks to not exhibit bias based on cultural background, a model should perform well on names from a variety of backgrounds. In this paper I experiment with the training data and input structure of an English Bi-LSTM name recognition model. I look at names from 103 countries to compare how well the model performs on names from different cultures, specifically in the context of a downstream task where extracted names will be matched to information on file. I find that a model with combined character and word input outperforms word-only models and may improve on accuracy compared to classical NER models that are not geared toward identifying unseen entity values.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
423,545
2409.05418
Distributed Optimization with Finite Bit Adaptive Quantization for Efficient Communication and Precision Enhancement
In realistic distributed optimization scenarios, individual nodes possess only partial information and communicate over bandwidth-constrained channels. For this reason, the development of efficient distributed algorithms is essential. In this paper we address the challenge of unconstrained distributed optimization. In our scenario each node's local function exhibits strong convexity with Lipschitz continuous gradients. The exchange of information between nodes occurs through $3$-bit bandwidth-limited channels (i.e., nodes exchange messages represented by only $3$ bits). Our proposed algorithm respects the network's bandwidth constraints by leveraging zoom-in and zoom-out operations to adjust quantizer parameters dynamically. We show that during our algorithm's operation nodes are able to converge to the exact optimal solution. Furthermore, we show that our algorithm achieves a linear convergence rate to the optimal solution. We conclude the paper with simulations that highlight our algorithm's unique characteristics.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
486,768
2403.02651
Learning at the Speed of Wireless: Online Real-Time Learning for AI-Enabled MIMO in NextG
Integration of artificial intelligence (AI) and machine learning (ML) into the air interface has been envisioned as a key technology for next-generation (NextG) cellular networks. At the air interface, multiple-input multiple-output (MIMO) and its variants such as multi-user MIMO (MU-MIMO) and massive/full-dimension MIMO have been key enablers across successive generations of cellular networks with evolving complexity and design challenges. Initiating active investigation into leveraging AI/ML tools to address these challenges for MIMO becomes a critical step towards an AI-enabled NextG air interface. At the NextG air interface, the underlying wireless environment will be extremely dynamic with operation adaptations performed on a sub-millisecond basis by MIMO operations such as MU-MIMO scheduling and rank/link adaptation. Given the enormously large number of operation adaptation possibilities, we contend that online real-time AI/ML-based approaches constitute a promising paradigm. To this end, we outline the inherent challenges and offer insights into the design of such online real-time AI/ML-based solutions for MIMO operations. An online real-time AI/ML-based method for MIMO-OFDM channel estimation is then presented, serving as a potential roadmap for developing similar techniques across various MIMO operations in NextG.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
434,888
2402.16607
GVA: Reconstructing Vivid 3D Gaussian Avatars from Monocular Videos
In this paper, we present a novel method that facilitates the creation of vivid 3D Gaussian avatars from monocular video inputs (GVA). Our innovation lies in addressing the intricate challenges of delivering high-fidelity human body reconstructions and aligning 3D Gaussians with human skin surfaces accurately. The key contributions of this paper are twofold. Firstly, we introduce a pose refinement technique to improve hand and foot pose accuracy by aligning normal maps and silhouettes. Precise pose is crucial for correct shape and appearance reconstruction. Secondly, we address the problems of unbalanced aggregation and initialization bias that previously diminished the quality of 3D Gaussian avatars, through a novel surface-guided re-initialization method that ensures accurate alignment of 3D Gaussian points with avatar surfaces. Experimental results demonstrate that our proposed method achieves high-fidelity and vivid 3D Gaussian avatar reconstruction. Extensive experimental analyses validate the performance qualitatively and quantitatively, demonstrating that it achieves state-of-the-art performance in photo-realistic novel view synthesis while offering fine-grained control over the human body and hand pose. Project page: https://3d-aigc.github.io/GVA/.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
432,621
2102.08327
Submodular Maximization subject to a Knapsack Constraint: Combinatorial Algorithms with Near-optimal Adaptive Complexity
Submodular maximization is a classic algorithmic problem with multiple applications in data mining and machine learning; there, the growing need to deal with massive instances motivates the design of algorithms balancing the quality of the solution with applicability. For the latter, an important measure is the adaptive complexity, which captures the number of sequential rounds of parallel computation needed by an algorithm to terminate. In this work we obtain the first constant factor approximation algorithm for non-monotone submodular maximization subject to a knapsack constraint with near-optimal $O(\log n)$ adaptive complexity. Low adaptivity by itself, however, is not enough: a crucial feature to account for is represented by the total number of function evaluations (or value queries). Our algorithm asks $\tilde{O}(n^2)$ value queries, but can be modified to run with only $\tilde{O}(n)$ instead, while retaining a low adaptive complexity of $O(\log^2n)$. Besides the above improvement in adaptivity, this is also the first combinatorial approach with sublinear adaptive complexity for the problem and yields algorithms comparable to the state-of-the-art even for the special cases of cardinality constraints or monotone objectives.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
220,412
1904.01783
Multi-task Learning for Chinese Word Usage Errors Detection
Chinese word usage errors often occur in non-native Chinese learners' writing. It is very helpful for non-native Chinese learners to have such errors detected automatically when learning to write. In this paper, we propose a novel approach, which takes advantage of different auxiliary tasks, such as POS-tagging prediction and word log-frequency prediction, to help the task of Chinese word usage error detection. With the help of these auxiliary tasks, we achieve state-of-the-art performance on the HSK corpus data, without any other extra data.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
126,249
1705.08584
MMD GAN: Towards Deeper Understanding of Moment Matching Network
Generative moment matching network (GMMN) is a deep generative model that differs from Generative Adversarial Network (GAN) by replacing the discriminator in GAN with a two-sample test based on kernel maximum mean discrepancy (MMD). Although some theoretical guarantees of MMD have been studied, the empirical performance of GMMN is still not as competitive as that of GAN on challenging and large benchmark datasets. The computational efficiency of GMMN is also less desirable in comparison with GAN, partially due to its requirement for a rather large batch size during the training. In this paper, we propose to improve both the model expressiveness of GMMN and its computational efficiency by introducing adversarial kernel learning techniques, as the replacement of a fixed Gaussian kernel in the original GMMN. The new approach combines the key ideas in both GMMN and GAN, hence we name it MMD GAN. The new distance measure in MMD GAN is a meaningful loss that enjoys the advantage of weak topology and can be optimized via gradient descent with relatively small batch sizes. In our evaluation on multiple benchmark datasets, including MNIST, CIFAR-10, CelebA and LSUN, the performance of MMD GAN significantly outperforms GMMN, and is competitive with other representative GAN works.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
74,048
1510.04209
Finite Uniform Bisimulations for Linear Systems with Finite Input Alphabets
We consider a class of systems over finite alphabets, namely discrete-time systems with linear dynamics and a finite input alphabet. We formulate a notion of finite uniform bisimulation, and motivate and propose a notion of regular finite uniform bisimulation. We derive sufficient conditions for the existence of finite uniform bisimulations, and propose and analyze algorithms to compute finite uniform bisimulations when the sufficient conditions are satisfied. We investigate the necessary conditions, and conclude with a set of illustrative examples.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
47,900
1604.00486
New extremal binary self-dual codes of lengths 64 and 66 from bicubic planar graphs
In this work, connected cubic planar bipartite graphs and related binary self-dual codes are studied. Binary self-dual codes of length 16 are obtained from face-vertex incidence matrices of these graphs. By considering their lifts to the ring R_2, new extremal binary self-dual codes of length 64 are constructed as Gray images. More precisely, we construct 15 new codes of length 64. Moreover, 10 new codes of length 66 are obtained by applying a building-up construction to the binary codes. Codes with these weight enumerators are constructed for the first time in the literature. The results are tabulated.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
54,037
2103.12624
Genetic column generation: Fast computation of high-dimensional multi-marginal optimal transport problems
We introduce a simple, accurate, and extremely efficient method for numerically solving the multi-marginal optimal transport (MMOT) problems arising in density functional theory. The method relies on (i) the sparsity of optimal plans [for $N$ marginals discretized by $\ell$ gridpoints each, general Kantorovich plans require $\ell^N$ gridpoints but the support of optimizers is of size $O(\ell\cdot N)$ [FV18]], (ii) the method of column generation (CG) from discrete optimization which to our knowledge has not hitherto been used in MMOT, and (iii) ideas from machine learning. The well-known bottleneck in CG consists in generating new candidate columns efficiently; we prove that in our context, finding the best new column is an NP-complete problem. To overcome this bottleneck we use a genetic learning method tailormade for MMOT in which the dual state within CG plays the role of an "adversary", in loose similarity to Wasserstein GANs. On a sequence of benchmark problems with up to 120 gridpoints and up to 30 marginals, our method always found the exact optimizers. Moreover, empirically the number of computational steps needed to find them appears to scale only polynomially when both $N$ and $\ell$ are simultaneously increased (while keeping their ratio fixed to mimic a thermodynamic limit of the particle system).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
226,241
2009.05147
Practical Cross-modal Manifold Alignment for Grounded Language
We propose a cross-modality manifold alignment procedure that leverages triplet loss to jointly learn consistent, multi-modal embeddings of language-based concepts of real-world items. Our approach learns these embeddings by sampling triples of anchor, positive, and negative data points from RGB-depth images and their natural language descriptions. We show that our approach can benefit from, but does not require, post-processing steps such as Procrustes analysis, in contrast to some of our baselines which require it for reasonable performance. We demonstrate the effectiveness of our approach on two datasets commonly used to develop robotic-based grounded language learning systems, where our approach outperforms four baselines, including a state-of-the-art approach, across five evaluation metrics.
false
false
false
false
false
false
true
true
false
false
false
true
false
false
false
false
false
false
195,233
2405.09324
Learning Coarse-Grained Dynamics on Graph
We consider a Graph Neural Network (GNN) non-Markovian modeling framework to identify coarse-grained dynamical systems on graphs. Our main idea is to systematically determine the GNN architecture by inspecting how the leading term of the Mori-Zwanzig memory term depends on the coarse-grained interaction coefficients that encode the graph topology. Based on this analysis, we found that the appropriate GNN architecture that will account for $K$-hop dynamical interactions has to employ a Message Passing (MP) mechanism with at least $2K$ steps. We also deduce that the memory length required for an accurate closure model decreases as a function of the interaction strength under the assumption that the interaction strength exhibits a power law that decays as a function of the hop distance. Supporting numerical demonstrations on two examples, a heterogeneous Kuramoto oscillator model and a power system, suggest that the proposed GNN architecture can predict the coarse-grained dynamics under fixed and time-varying graph topologies.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
454,366
2502.14467
Provable Quantum Algorithm Advantage for Gaussian Process Quadrature
The aim of this paper is to develop novel quantum algorithms for Gaussian process quadrature methods. Gaussian process quadratures are numerical integration methods where Gaussian processes are used as functional priors for the integrands to capture the uncertainty arising from the sparse function evaluations. Quantum computers have emerged as potential replacements for classical computers, offering exponential reductions in the computational complexity of machine learning tasks. In this paper, we combine Gaussian process quadratures and quantum computing by proposing a quantum low-rank Gaussian process quadrature method based on a Hilbert space approximation of the Gaussian process kernel and enhancing the quadrature using a quantum circuit. The method combines the quantum phase estimation algorithm with the quantum principal component analysis technique to extract information up to a desired rank. Then, Hadamard and SWAP tests are implemented to find the expected value and variance that determines the quadrature. We use numerical simulations of a quantum computer to demonstrate the effectiveness of the method. Furthermore, we provide a theoretical complexity analysis that shows a polynomial advantage over classical Gaussian process quadrature methods. The code is available at https://github.com/cagalvisf/Quantum_HSGPQ.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
535,844
1910.09679
Sparse Networks with Core-Periphery Structure
We propose a statistical model for graphs with a core-periphery structure. To do this we define a precise notion of what it means for a graph to have this structure, based on the sparsity properties of the subgraphs of core and periphery nodes. We present a class of sparse graphs with such properties, and provide methods to simulate from this class, and to perform posterior inference. We demonstrate that our model can detect core-periphery structure in simulated and real-world networks.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
150,264
2304.05736
Communicating Uncertainty in Machine Learning Explanations: A Visualization Analytics Approach for Predictive Process Monitoring
As data-driven intelligent systems advance, the need for reliable and transparent decision-making mechanisms has become increasingly important. Therefore, it is essential to integrate uncertainty quantification and model explainability approaches to foster trustworthy business and operational process analytics. This study explores how model uncertainty can be effectively communicated in global and local post-hoc explanation approaches, such as Partial Dependence Plots (PDP) and Individual Conditional Expectation (ICE) plots. In addition, this study examines appropriate visualization analytics approaches to facilitate such methodological integration. By combining these two research directions, decision-makers can not only justify the plausibility of explanation-driven actionable insights but also validate their reliability. Finally, the study includes expert interviews to assess the suitability of the proposed approach and designed interface for a real-world predictive process monitoring problem in the manufacturing domain.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
357,730
1907.07315
A General Framework of Learning Multi-Vehicle Interaction Patterns from Videos
Semantic learning and understanding of multi-vehicle interaction patterns in a cluttered driving environment are essential but challenging for autonomous vehicles to make proper decisions. This paper presents a general framework to gain insights into intricate multi-vehicle interaction patterns from bird's-eye view traffic videos. We adopt a Gaussian velocity field to describe the time-varying multi-vehicle interaction behaviors and then use deep autoencoders to learn associated latent representations for each temporal frame. Then, we utilize a hidden semi-Markov model with a hierarchical Dirichlet process as a prior to segment these sequential representations into granular components, also called traffic primitives, corresponding to interaction patterns. Experimental results demonstrate that our proposed framework can extract traffic primitives from videos, thus providing a semantic way to analyze multi-vehicle interaction patterns, even for cluttered driving scenarios that are far messier than human beings can cope with.
false
false
false
false
false
false
true
true
false
false
false
true
false
false
false
false
false
false
138,841
2008.07725
SoDA: Multi-Object Tracking with Soft Data Association
Robust multi-object tracking (MOT) is a prerequisite for a safe deployment of self-driving cars. Tracking objects, however, remains a highly challenging problem, especially in cluttered autonomous driving scenes in which objects tend to interact with each other in complex ways and frequently get occluded. We propose a novel approach to MOT that uses attention to compute track embeddings that encode the spatiotemporal dependencies between observed objects. This attention measurement encoding allows our model to relax hard data associations, which may lead to unrecoverable errors. Instead, our model aggregates information from all object detections via soft data associations. The resulting latent space representation allows our model to learn to reason about occlusions in a holistic data-driven way and maintain track estimates for objects even when they are occluded. Our experimental results on the Waymo Open Dataset suggest that our approach leverages modern large-scale datasets and performs favorably compared to the state of the art in visual multi-object tracking.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
192,203
1912.12607
Towards Unified INT8 Training for Convolutional Neural Network
Recently, low-bit (e.g., 8-bit) network quantization has been extensively studied to accelerate inference. Beyond inference, low-bit training with quantized gradients can bring further considerable acceleration, since the backward process is often computation-intensive. Unfortunately, inappropriate quantization of backward propagation usually makes training unstable and can even cause it to crash. A successful unified low-bit training framework that can support diverse networks on various tasks is still lacking. In this paper, we attempt to build a unified 8-bit (INT8) training framework for common convolutional neural networks from the aspects of both accuracy and speed. First, we empirically find four distinctive characteristics of gradients, which provide us insightful clues for gradient quantization. Then, we theoretically give an in-depth analysis of the convergence bound and derive two principles for stable INT8 training. Finally, we propose two universal techniques, including Direction Sensitive Gradient Clipping that reduces the direction deviation of gradients and Deviation Counteractive Learning Rate Scaling that avoids illegal gradient updates along the wrong direction. The experiments show that our unified solution promises accurate and efficient INT8 training for a variety of networks and tasks, including MobileNetV2, InceptionV3 and object detection, on which prior studies have never succeeded. Moreover, it enjoys strong flexibility to run on off-the-shelf hardware, and reduces the training time by 22% on a Pascal GPU without much optimization effort. We believe that this pioneering study will help lead the community towards a fully unified INT8 training framework for convolutional neural networks.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
158,883
2212.13295
Structure-based drug discovery with deep learning
Artificial intelligence (AI) in the form of deep learning bears promise for drug discovery and chemical biology, $\textit{e.g.}$, to predict protein structure and molecular bioactivity, plan organic synthesis, and design molecules $\textit{de novo}$. While most of the deep learning efforts in drug discovery have focused on ligand-based approaches, structure-based drug discovery has the potential to tackle unsolved challenges, such as affinity prediction for unexplored protein targets, binding-mechanism elucidation, and the rationalization of related chemical kinetic properties. Advances in deep learning methodologies and the availability of accurate predictions for protein tertiary structure advocate for a $\textit{renaissance}$ in structure-based approaches for drug discovery guided by AI. This review summarizes the most prominent algorithmic concepts in structure-based deep learning for drug discovery, and forecasts opportunities, applications, and challenges ahead.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
338,263
2411.07231
Watermark Anything with Localized Messages
Image watermarking methods are not tailored to handle small watermarked areas. This restricts applications in real-world scenarios where parts of the image may come from different sources or have been edited. We introduce a deep-learning model for localized image watermarking, dubbed the Watermark Anything Model (WAM). The WAM embedder imperceptibly modifies the input image, while the extractor segments the received image into watermarked and non-watermarked areas and recovers one or several hidden messages from the areas found to be watermarked. The models are jointly trained at low resolution and without perceptual constraints, then post-trained for imperceptibility and multiple watermarks. Experiments show that WAM is competitive with state-of-the-art methods in terms of imperceptibility and robustness, especially against inpainting and splicing, even on high-resolution images. Moreover, it offers new capabilities: WAM can locate watermarked areas in spliced images and extract distinct 32-bit messages with less than 1 bit error from multiple small regions - no larger than 10% of the image surface - even for small $256\times 256$ images.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
507,448
2311.06647
Robust Text Classification: Analyzing Prototype-Based Networks
Downstream applications often require text classification models to be accurate and robust. While the accuracy of state-of-the-art Language Models (LMs) approximates human performance, they often exhibit a drop in performance on noisy data found in the real world. This lack of robustness can be concerning, as even small perturbations in the text, irrelevant to the target task, can cause classifiers to incorrectly change their predictions. A potential solution is the family of Prototype-Based Networks (PBNs), which classifies examples based on their similarity to prototypical examples of a class (prototypes) and has been shown to be robust to noise for computer vision tasks. In this paper, we study whether the robustness properties of PBNs transfer to text classification tasks under both targeted and static adversarial attack settings. Our results show that PBNs, as a mere architectural variation of vanilla LMs, offer more robustness compared to vanilla LMs under both targeted and static settings. We showcase how PBNs' interpretability can help us understand PBNs' robustness properties. Finally, our ablation studies reveal the sensitivity of PBNs' robustness to how strictly clustering is done in the training phase, as tighter clustering results in less robust PBNs.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
407,017
2102.04721
Classification of Imbalanced Credit scoring data sets Based on Ensemble Method with the Weighted-Hybrid-Sampling
In the era of big data, the use of credit-scoring models to accurately determine the credit risk of applicants is becoming a growing trend. Conventional machine learning on credit scoring data sets tends to classify the minority class poorly, which may bring huge commercial harm to banks. In order to classify imbalanced data sets, we propose a new ensemble algorithm, namely, Weighted-Hybrid-Sampling-Boost (WHSBoost). In the data-sampling stage, we process the imbalanced data sets with weights by the Weighted-SMOTE method and the Weighted-Under-Sampling method, and thus obtain a balanced training sample data set with equal weight. In the ensemble algorithm, each time we train the base classifier, the balanced data set is given by the method above. In order to verify the applicability and robustness of the WHSBoost algorithm, we performed experiments on simulation data sets, real benchmark data sets and real credit scoring data sets, comparing WHSBoost with SMOTE, SMOTEBoost and HSBoost based on SVM, BPNN, DT and KNN.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
219,205