Dataset schema:
- id: string (lengths 9 to 16)
- title: string (lengths 4 to 278)
- abstract: string (lengths 3 to 4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (0 to 541k)
1605.00392
Revisiting Human Action Recognition: Personalization vs. Generalization
By thoroughly revisiting the classic human action recognition paradigm, this paper proposes a new approach to the design of effective action classification systems. Using publicly available three-dimensional (MoCap) action/activity datasets as a testbed, we analyzed and validated different training/testing strategies. In particular, since each human action in the datasets is performed several times by different subjects, we were able to precisely quantify the effect of inter- and intra-subject variability, and thus the impact of several learning approaches on classification performance. The net result is that standard testing strategies that cross-validate the algorithm using typical splits of the data (holdout, k-fold, or one-subject-out) are always outperformed by a "personalization" strategy which learns how a subject performs an action. In other words, it is advantageous to customize (i.e., personalize) the method to the actions carried out by each subject, rather than trying to generalize action executions across subjects. Consequently, we propose an action recognition framework consisting of a two-stage classification approach in which, given a test action, the subject is first identified before the actual recognition of the action takes place. Despite the basic, off-the-shelf descriptors and standard classifiers adopted, we observed a relevant increase in performance with respect to standard state-of-the-art algorithms, motivating the use of personalized approaches for designing effective action recognition systems.
Categories: cs.CV
__index_level_0__: 55,337
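The two-stage "identify the subject, then classify the action" scheme this abstract describes can be illustrated with a minimal sketch. Everything below is synthetic and hypothetical (nearest-centroid classifiers on random features), not the paper's actual descriptors or classifiers; it only shows the control flow of subject identification followed by personalized action recognition.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_actions, dim = 3, 4, 8

# Synthetic features: each subject adds a personal offset to each action's signature.
subject_offset = rng.normal(0, 2.0, (n_subjects, dim))
action_sig = rng.normal(0, 1.0, (n_actions, dim))

def sample(subj, act, n=20):
    return action_sig[act] + subject_offset[subj] + rng.normal(0, 0.3, (n, dim))

train = {(s, a): sample(s, a) for s in range(n_subjects) for a in range(n_actions)}

# Stage 1: one centroid per subject, pooled over that subject's actions.
subj_centroid = np.stack([
    np.concatenate([train[(s, a)] for a in range(n_actions)]).mean(axis=0)
    for s in range(n_subjects)
])
# Stage 2: per-subject action centroids (the "personalized" models).
act_centroid = np.stack([
    np.stack([train[(s, a)].mean(axis=0) for a in range(n_actions)])
    for s in range(n_subjects)
])

def predict(x):
    s = int(np.argmin(np.linalg.norm(subj_centroid - x, axis=1)))   # identify the subject
    a = int(np.argmin(np.linalg.norm(act_centroid[s] - x, axis=1)))  # then the action
    return s, a

test = sample(1, 2, n=1)[0]
print(predict(test))
```

The point of the sketch is the split: stage 2 only ever compares against the identified subject's own action models, which is what "personalization" means in the abstract.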
2309.11091
Learning Segment Similarity and Alignment in Large-Scale Content Based Video Retrieval
With the explosive growth of web videos in recent years, large-scale Content-Based Video Retrieval (CBVR) becomes increasingly essential in video filtering, recommendation, and copyright protection. Segment-level CBVR (S-CBVR) locates the start and end time of similar segments at finer granularity, which benefits user browsing efficiency and infringement detection, especially in long video scenarios. The challenge of the S-CBVR task is how to achieve high temporal alignment accuracy with efficient computation and low storage consumption. In this paper, we propose a Segment Similarity and Alignment Network (SSAN) to deal with this challenge, which is the first to be trained end-to-end for S-CBVR. SSAN is based on two newly proposed modules in video retrieval: (1) an efficient Self-supervised Keyframe Extraction (SKE) module to reduce redundant frame features, and (2) a robust Similarity Pattern Detection (SPD) module for temporal alignment. In comparison with uniform frame extraction, SKE not only saves feature storage and search time, but also achieves comparable accuracy with limited extra computation time. In terms of temporal alignment, SPD localizes similar segments with higher accuracy and efficiency than existing deep learning methods. Furthermore, we jointly train SSAN with SKE and SPD and achieve an end-to-end improvement. Meanwhile, the two key modules SKE and SPD can also be effectively inserted into other video retrieval pipelines to gain considerable performance improvements. Experimental results on public datasets show that SSAN obtains higher alignment accuracy while saving storage and online query computational cost compared to existing methods.
Categories: cs.CV, Other
__index_level_0__: 393,276
2410.12676
Identity Emergence in the Context of Vaccine Criticism in France
This study investigates the emergence of collective identity among individuals critical of vaccination policies in France during the COVID-19 pandemic. As concerns grew over mandated health measures, a loose collective formed on Twitter to assert autonomy over vaccination decisions. Using analyses of pronoun usage, outgroup labeling, and tweet similarity, we examine how this identity emerged. A turning point occurred following President Macron's announcement of mandatory vaccination for health workers and the health pass, sparking substantial changes in linguistic patterns. We observed a shift from first-person singular (I) to first-person plural (we) pronouns, alongside an increased focus on vaccinated individuals as a central outgroup, in addition to authority figures. This shift in language patterns was further reflected in the behavior of new users. An analysis of incoming users revealed that a core group of frequent posters played a crucial role in fostering cohesion and shaping norms. New users who joined during the week of Macron's announcement and continued posting afterward showed an increased similarity with the language of the core group, contributing to the crystallization of the emerging collective identity.
Categories: cs.SI, cs.CY
__index_level_0__: 499,137
2005.11077
Driver Identification through Stochastic Multi-State Car-Following Modeling
Intra-driver and inter-driver heterogeneity has been confirmed to exist in human driving behaviors by many studies. In this study, a joint model of the two types of heterogeneity in car-following behavior is proposed as an approach to driver profiling and identification. It is assumed that all drivers share a pool of driver states; under each state, a car-following data sequence obeys a specific probability distribution in feature space; and each driver has his/her own probability distribution over the states, called the driver profile, which characterizes the intra-driver heterogeneity, while the differences between the driver profiles of different drivers depict the inter-driver heterogeneity. Thus, the driver profile can be used to distinguish a driver from others. Based on this assumption, a stochastic car-following model is proposed to take both intra-driver and inter-driver heterogeneity into consideration, and a method is proposed to jointly learn the parameters of the behavioral feature extractor, the driver states, and the driver profiles. Experiments demonstrate the performance of the proposed method in driver identification on naturalistic car-following data: an accuracy of 82.3% is achieved in an 8-driver experiment using 10 car-following sequences of 15 seconds each for online inference. The potential for fast registration of new drivers is demonstrated and discussed.
Categories: cs.AI, cs.CV
__index_level_0__: 178,376
1612.08034
Push Recovery of a Humanoid Robot Based on Model Predictive Control and Capture Point
The three bio-inspired strategies that have been used for balance recovery of biped robots are the ankle, hip and stepping strategies. However, there are several cases for a biped robot where stepping is not possible, e.g. when the available contact surfaces are limited. In this situation, balance recovery by modulating the angular momentum of the upper body (hip strategy) or the Zero Moment Point (ZMP) (ankle strategy) is essential. In this paper, a single Model Predictive Control (MPC) scheme is employed for controlling the Capture Point (CP) to a desired position by modulating both the ZMP and the Centroidal Moment Pivot (CMP). The goal of the proposed controller is to control the CP, employing the CMP when the CP is out of the support polygon, and/or the ZMP when the CP is inside the support polygon. The proposed algorithm is implemented on an abstract model of the SURENA III humanoid robot. The obtained results show the effectiveness of the proposed approach in the presence of severe pushes, even when the support polygon shrinks to a point or a line.
Categories: cs.RO
__index_level_0__: 66,017
1902.06554
MetaGrasp: Data Efficient Grasping by Affordance Interpreter Network
Data-driven approaches to grasping have shown significant advances recently, but they usually require large amounts of training data. To increase the efficiency of grasping data collection, this paper presents a novel grasp training system covering the whole pipeline from data collection to model inference. The system collects effective grasp samples with a corrective strategy assisted by an antipodal grasp rule, and we design an affordance interpreter network to predict a pixelwise grasp affordance map. We define graspability, ungraspability and background as grasp affordances. The key advantage of our system is that the pixel-level affordance interpreter network, trained with only a small number of grasp samples under the antipodal rule, achieves significant performance on totally unseen objects and backgrounds. Training samples are collected only in simulation. Extensive qualitative and quantitative experiments demonstrate the accuracy and robustness of our proposed approach. In real-world grasp experiments, we achieve a grasp success rate of 93% on a set of household items and 91% on a set of adversarial items with only about 6,300 simulated samples. We also achieve 87% accuracy in a clutter scenario. Although the model is trained using only RGB images, it also performs well when the background textures are changed, achieving up to 94% accuracy on the set of adversarial objects, which outperforms current state-of-the-art methods.
Categories: cs.RO, cs.CV
__index_level_0__: 121,789
2406.01906
ProGEO: Generating Prompts through Image-Text Contrastive Learning for Visual Geo-localization
Visual Geo-localization (VG) refers to the process of identifying the location depicted in query images, and is widely applied in the robotics field and in computer vision tasks such as autonomous driving, the metaverse, augmented reality, and SLAM. For fine-grained images lacking specific text descriptions, directly applying pure visual methods to represent neighborhood features often leads the model to focus on overly fine-grained features and fail to fully mine the semantic information in the images. Therefore, we propose a two-stage training method to enhance visual performance and use contrastive learning to mine challenging samples. We first leverage the multi-modal description capability of CLIP (Contrastive Language-Image Pretraining) to create a set of learnable text prompts for each geographic image feature to form vague descriptions. Then, by utilizing dynamic text prompts to assist the training of the image encoder, we enable the image encoder to learn better and more generalizable visual features. This strategy of applying text to purely visual tasks addresses the challenge of using multi-modal models for geographic images, which often lack precise descriptions and are therefore difficult to utilize widely. We validate the effectiveness of the proposed strategy on several large-scale visual geo-localization datasets, and our method achieves competitive results on them. Our code and model are available at https://github.com/Chain-Mao/ProGEO.
Categories: cs.IR, cs.CV
__index_level_0__: 460,514
2107.02621
Energy Consumption of Deep Generative Audio Models
In most scientific domains, the deep learning community has largely focused on the quality of deep generative models, resulting in highly accurate and successful solutions. However, this race for quality comes at a tremendous computational cost, which incurs vast energy consumption and greenhouse gas emissions. At the heart of this problem are the measures that we use as a scientific community to evaluate our work. In this paper, we suggest relying on a multi-objective measure based on Pareto optimality, which takes into account both the quality of the model and its energy consumption. By applying our measure to the current state of the art in generative audio models, we show that it can drastically change the significance of the results. We believe that this type of metric can be widely used by the community to evaluate their work, while putting computational cost -- and ultimately energy consumption -- in the spotlight of deep learning research.
Categories: cs.SD, cs.LG
__index_level_0__: 244,883
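The Pareto-based measure in this abstract rests on a standard notion: a model is Pareto-optimal if no other model is at least as good on every axis and strictly better on at least one. A minimal sketch with made-up (quality, energy) pairs, not values from the paper:

```python
def pareto_front(models):
    """Return names of models not dominated on (quality: higher is better, energy: lower is better)."""
    front = []
    for name, q, e in models:
        dominated = any(
            (q2 >= q and e2 <= e) and (q2 > q or e2 < e)  # at least as good everywhere, strictly better somewhere
            for _, q2, e2 in models
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical generative models: (name, quality score, energy in kWh).
models = [("A", 0.95, 120.0), ("B", 0.90, 30.0), ("C", 0.85, 35.0), ("D", 0.80, 10.0)]
print(pareto_front(models))  # ['A', 'B', 'D'] -- C is dominated by B (worse quality AND more energy)
```

Ranking by quality alone would put C ahead of D; the two-objective view instead discards C entirely, which is the kind of shift in significance the abstract argues for.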
2011.04328
Risk Assessment for Machine Learning Models
In this paper we propose a framework for assessing the risk associated with deploying a machine learning model in a specified environment. For that we carry over the risk definition from decision theory to machine learning. We develop and implement a method that allows one to define deployment scenarios, test the machine learning model under the conditions specified in each scenario, and estimate the damage associated with the output of the machine learning model under test. Using the likelihood of each scenario together with the estimated damage we define \emph{key risk indicators} of a machine learning model. The definition of scenarios and weighting by their likelihood allows for standardized risk assessment in machine learning throughout multiple domains of application. In particular, in our framework, the robustness of a machine learning model to random input corruptions, distributional shifts caused by a changing environment, and adversarial perturbations can be assessed.
Categories: cs.AI, cs.LG
__index_level_0__: 205,550
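The risk definition carried over from decision theory (likelihood-weighted damage per deployment scenario) reduces to a simple weighted sum. The scenario names, likelihoods, and damage values below are invented for illustration, not taken from the paper:

```python
def key_risk_indicator(scenarios):
    """Decision-theoretic risk: sum of likelihood-weighted damages over deployment scenarios."""
    return sum(s["likelihood"] * s["damage"] for s in scenarios)

# Hypothetical deployment scenarios for a deployed classifier.
scenarios = [
    {"name": "clean input",        "likelihood": 0.90, "damage": 1.0},
    {"name": "sensor noise",       "likelihood": 0.08, "damage": 20.0},
    {"name": "adversarial attack", "likelihood": 0.02, "damage": 100.0},
]
print(key_risk_indicator(scenarios))  # 0.90*1.0 + 0.08*20.0 + 0.02*100.0 = 4.5
```

In the framework described, each "damage" would itself come from testing the model under that scenario's conditions (corruptions, shifts, perturbations) rather than being set by hand.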
2312.07637
Responsibility in Extensive Form Games
Two different forms of responsibility, counterfactual and seeing-to-it, have been extensively discussed in philosophy and AI in the context of a single agent or multiple agents acting simultaneously. Although the generalisation of counterfactual responsibility to a setting where multiple agents act in some order is relatively straightforward, the same cannot be said about seeing-to-it responsibility. Two versions of the seeing-to-it modality applicable to such settings have been proposed in the literature, but neither perfectly captures the intuition of responsibility. This paper proposes a definition of seeing-to-it responsibility for such settings that amalgamates the two modalities. It shows that the newly proposed notion of responsibility and counterfactual responsibility are not definable through each other, and studies the responsibility gap for these two forms of responsibility. Although the two forms are not enough to ascribe responsibility in every possible situation, this gap disappears once higher-order responsibility is taken into account.
Categories: cs.AI
__index_level_0__: 415,008
2408.12153
DimeRec: A Unified Framework for Enhanced Sequential Recommendation via Generative Diffusion Models
Sequential Recommendation (SR) plays a pivotal role in recommender systems by tailoring recommendations to user preferences based on their non-stationary historical interactions. Achieving high-quality performance in SR requires attention to both item representation and diversity. However, designing an SR method that simultaneously optimizes these merits remains a long-standing challenge. In this study, we address this issue by integrating recent generative Diffusion Models (DM) into SR. DM has demonstrated utility in representation learning and diverse image generation. Nevertheless, a straightforward combination of SR and DM leads to sub-optimal performance due to discrepancies in learning objectives (recommendation vs. noise reconstruction) and the respective learning spaces (non-stationary vs. stationary). To overcome this, we propose a novel framework called DimeRec (\textbf{Di}ffusion with \textbf{m}ulti-interest \textbf{e}nhanced \textbf{Rec}ommender). DimeRec synergistically combines a guidance extraction module (GEM) and a generative diffusion aggregation module (DAM). The GEM extracts crucial stationary guidance signals from the user's non-stationary interaction history, while the DAM employs a generative diffusion process conditioned on GEM's outputs to reconstruct and generate consistent recommendations. Our numerical experiments demonstrate that DimeRec significantly outperforms established baseline methods across three publicly available datasets. Furthermore, we have successfully deployed DimeRec on a large-scale short video recommendation platform, serving hundreds of millions of users. Live A/B testing confirms that our method improves both users' time spent and result diversification.
Categories: cs.IR, cs.LG
__index_level_0__: 482,615
2111.09395
FinRL: Deep Reinforcement Learning Framework to Automate Trading in Quantitative Finance
Deep reinforcement learning (DRL) has been envisioned to have a competitive edge in quantitative finance. However, there is a steep development curve for quantitative traders to obtain an agent that automatically positions to win in the market, namely \textit{to decide where to trade, at what price} and \textit{what quantity}, due to the error-prone programming and arduous debugging. In this paper, we present the first open-source framework \textit{FinRL} as a full pipeline to help quantitative traders overcome the steep learning curve. FinRL is featured with simplicity, applicability and extensibility under the key principles, \textit{full-stack framework, customization, reproducibility} and \textit{hands-on tutoring}. Embodied as a three-layer architecture with modular structures, FinRL implements fine-tuned state-of-the-art DRL algorithms and common reward functions, while alleviating the debugging workloads. Thus, we help users pipeline the strategy design at a high turnover rate. At multiple levels of time granularity, FinRL simulates various markets as training environments using historical data and live trading APIs. Being highly extensible, FinRL reserves a set of user-import interfaces and incorporates trading constraints such as market friction, market liquidity and investor's risk-aversion. Moreover, serving as practitioners' stepping stones, typical trading tasks are provided as step-by-step tutorials, e.g., stock trading, portfolio allocation, cryptocurrency trading, etc.
Categories: cs.LG
__index_level_0__: 266,997
2312.10920
Domain adaptation and physical constraints transfer learning for shale gas production
Effective prediction of shale gas production is crucial for strategic reservoir development. However, in new shale gas blocks, two main challenges are encountered: (1) the occurrence of negative transfer due to insufficient data, and (2) the limited interpretability of deep learning (DL) models. To tackle these problems, we propose a novel transfer learning methodology that utilizes domain adaptation and physical constraints. This methodology effectively employs historical data from the source domain to reduce negative transfer from the data distribution perspective, while also using physical constraints to build a robust and reliable prediction model that integrates various types of data. The methodology starts by dividing the production data from the source domain into multiple subdomains, thereby enhancing data diversity. It then uses Maximum Mean Discrepancy (MMD) and global average distance measures to decide on the feasibility of transfer. Through domain adaptation, we integrate all transferable knowledge, resulting in a more comprehensive target model. Lastly, by incorporating drilling, completion, and geological data as physical constraints, we develop a hybrid model. This model, a combination of a multi-layer perceptron (MLP) and a Transformer (Transformer-MLP), is designed to maximize interpretability. Experimental validation in China's southwestern region confirms the method's effectiveness.
Categories: cs.LG
__index_level_0__: 416,366
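Maximum Mean Discrepancy (MMD), used in this abstract to decide transfer feasibility between domains, has a standard kernel-based estimator: MMD^2 = mean(K_xx) + mean(K_yy) - 2*mean(K_xy). A minimal sketch with an RBF kernel on synthetic data; the kernel choice, bandwidth, and data are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, (200, 2))   # "source domain" samples
near = rng.normal(0.1, 1.0, (200, 2))  # similar distribution -> small MMD, transfer plausible
far = rng.normal(3.0, 1.0, (200, 2))   # shifted distribution -> large MMD, risk of negative transfer
print(mmd2_rbf(src, near) < mmd2_rbf(src, far))  # True
```

A low MMD between a source subdomain and the target is the kind of signal the methodology uses to admit that subdomain's knowledge into the transfer.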
2307.03833
Back to Optimization: Diffusion-based Zero-Shot 3D Human Pose Estimation
Learning-based methods have dominated the 3D human pose estimation (HPE) tasks with significantly better performance in most benchmarks than traditional optimization-based methods. Nonetheless, 3D HPE in the wild is still the biggest challenge for learning-based models, whether with 2D-3D lifting, image-to-3D, or diffusion-based methods, since the trained networks implicitly learn camera intrinsic parameters and domain-based 3D human pose distributions and estimate poses by statistical average. On the other hand, the optimization-based methods estimate results case-by-case, which can predict more diverse and sophisticated human poses in the wild. By combining the advantages of optimization-based and learning-based methods, we propose the \textbf{Ze}ro-shot \textbf{D}iffusion-based \textbf{O}ptimization (\textbf{ZeDO}) pipeline for 3D HPE to solve the problem of cross-domain and in-the-wild 3D HPE. Our multi-hypothesis \textit{\textbf{ZeDO}} achieves state-of-the-art (SOTA) performance on Human3.6M, with minMPJPE $51.4$mm, without training with any 2D-3D or image-3D pairs. Moreover, our single-hypothesis \textit{\textbf{ZeDO}} achieves SOTA performance on 3DPW dataset with PA-MPJPE $40.3$mm on cross-dataset evaluation, which even outperforms learning-based methods trained on 3DPW.
Categories: cs.AI, cs.CV
__index_level_0__: 378,165
2104.00660
Recognizing and Splitting Conditional Sentences for Automation of Business Processes Management
Business Process Management (BPM) is the discipline responsible for discovering, analyzing, redesigning, monitoring, and controlling business processes. One of the most crucial tasks of BPM is discovering and modelling business processes from text documents. In this paper, we present our system that resolves an end-to-end problem consisting of 1) recognizing conditional sentences in technical documents, 2) finding boundaries to extract conditional and resultant clauses from each conditional sentence, and 3) categorizing each resultant clause as Action or Consequence, which later helps to generate new steps in our business process model automatically. We created a new dataset and three models to solve this problem. Our best model achieved very promising results of 83.82, 87.84, and 85.75 for Precision, Recall, and F1, respectively, for extracting Condition, Action, and Consequence clauses using the Exact Match metric.
Categories: cs.CL
__index_level_0__: 228,071
1106.5626
A distributed control strategy for reactive power compensation in smart microgrids
We consider the problem of optimal reactive power compensation for the minimization of power distribution losses in a smart microgrid. We first propose an approximate model for the power distribution network, which allows us to cast the problem into the class of convex quadratic, linearly constrained, optimization problems. We then consider the specific problem of commanding the microgenerators connected to the microgrid, in order to achieve the optimal injection of reactive power. For this task, we design a randomized, gossip-like optimization algorithm. We show how a distributed approach is possible, where microgenerators need to have only a partial knowledge of the problem parameters and of the state, and can perform only local measurements. For the proposed algorithm, we provide conditions for convergence together with an analytic characterization of the convergence speed. The analysis shows that, in radial networks, the best performance can be achieved when we command cooperation among units that are neighbors in the electric topology. Numerical simulations are included to validate the proposed model and to confirm the analytic results about the performance of the proposed algorithm.
Categories: cs.SY
__index_level_0__: 11,046
2312.15346
Learning Multi-Step Manipulation Tasks from A Single Human Demonstration
Learning from human demonstrations has exhibited remarkable achievements in robot manipulation. However, the challenge remains to develop a robot system that matches human capabilities and data efficiency in learning and generalizability, particularly in complex, unstructured real-world scenarios. We propose a system that processes RGBD videos to translate human actions to robot primitives and identifies task-relevant key poses of objects using Grounded Segment Anything. We then address challenges for robots in replicating human actions, considering the human-robot differences in kinematics and collision geometry. To test the effectiveness of our system, we conducted experiments focusing on manual dishwashing. With a single human demonstration recorded in a mockup kitchen, the system achieved 50-100% success for each step and up to a 40% success rate for the whole task with different objects in a home kitchen. Videos are available at https://robot-dishwashing.github.io
Categories: cs.RO
__index_level_0__: 417,978
2103.00545
Snowy Night-to-Day Translator and Semantic Segmentation Label Similarity for Snow Hazard Indicator
In 2021, Japan recorded more than three times as much snowfall as usual, so road users may encounter dangerous situations. The poor visibility caused by snow triggers traffic accidents. For example, on 19 January 2021, dry snow and strong winds of 27 m/s caused blizzards and whiteout conditions; multiple accidents with 17 casualties occurred, and 134 vehicles were stuck for 10 hours over a 1 km stretch. At night, the temperature drops and the road surface tends to freeze. CCTV images of the road surface have the advantage of allowing the status of major points to be monitored simultaneously. Road managers are required to make decisions on road closures and snow removal work based on road surface conditions, even at night, and in parallel to alert road users to hazardous road surfaces. This paper proposes a method to automate a snow hazard indicator in which the daytime road surface region is generated from a night snow image using the conditional GAN pix2pix. In addition, the road surface and the snow-covered ROI are predicted using the semantic segmentation network DeepLabv3+ with a MobileNet backbone, and the snow hazard indicator automatically computes how much of the night road surface is covered with snow. We demonstrate several results for a cold, snowy region of Japan from 19 to 21 January 2021, and note the usefulness of the high similarity between the snowy night-to-day fake output and real snowy daytime images for night snow visibility.
Categories: cs.CV
__index_level_0__: 222,320
2412.01949
Identifying Key Nodes for the Influence Spread using a Machine Learning Approach
The identification of key nodes in complex networks is an important topic in many network science areas. It is vital to a variety of real-world applications, including viral marketing, epidemic spreading and influence maximization. In recent years, machine learning algorithms have proven to outperform the conventional, centrality-based methods in accuracy and consistency, but this approach still requires further refinement. What information about the influencers can be extracted from the network? How can we precisely obtain the labels required for training? Can these models generalize well? In this paper, we answer these questions by presenting an enhanced machine learning-based framework for the influence spread problem. We focus on identifying key nodes for the Independent Cascade model, which is a popular reference method. Our main contribution is an improved process of obtaining the labels required for training by introducing 'Smart Bins' and proving their advantage over known methods. Next, we show that our methodology allows ML models to not only predict the influence of a given node, but to also determine other characteristics of the spreading process-which is another novelty to the relevant literature. Finally, we extensively test our framework and its ability to generalize beyond complex networks of different types and sizes, gaining important insight into the properties of these methods.
Categories: cs.SI, cs.AI
__index_level_0__: 513,319
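The Independent Cascade model this abstract uses as its reference diffusion process is easy to simulate: each newly activated node gets a single chance to activate each out-neighbor with probability p. A toy Monte-Carlo sketch; the graph, probability, and seed choices are illustrative, and the paper's "Smart Bins" labeling is not reproduced here:

```python
import random

def independent_cascade(graph, seeds, p=0.3, rng=None):
    """One IC run: returns the set of nodes activated from the given seeds."""
    rng = rng or random.Random(42)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt  # only newly activated nodes try to spread next round
    return active

def expected_spread(graph, seeds, p=0.3, runs=1000):
    """Monte-Carlo estimate of the expected number of activated nodes."""
    rng = random.Random(0)
    return sum(len(independent_cascade(graph, seeds, p, rng)) for _ in range(runs)) / runs

# Toy directed graph; node 0 is a plausible "key node" (high out-degree), node 5 a sink.
graph = {0: [1, 2, 3], 1: [4], 2: [4], 3: [5], 4: [5], 5: []}
print(expected_spread(graph, {0}) > expected_spread(graph, {5}))  # True: the hub spreads further
```

Ranking nodes by this estimated spread (or by labels derived from it) is the ground-truth side of the influence-identification task the abstract's ML models are trained on.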
2502.14462
Single-image Reflectance and Transmittance Estimation from Any Flatbed Scanner
Flatbed scanners have emerged as promising devices for high-resolution, single-image material capture. However, existing approaches assume very specific conditions, such as uniform diffuse illumination, which are only available in certain high-end devices, hindering their scalability and cost. In contrast, in this work, we introduce a method inspired by intrinsic image decomposition, which accurately removes both shading and specularity, effectively allowing captures with any flatbed scanner. Further, we extend previous work on single-image material reflectance capture with the estimation of opacity and transmittance, critical components of full material appearance (SVBSDF), improving the results for any material captured with a flatbed scanner, at a very high resolution and accuracy.
Categories: cs.AI, cs.LG, cs.CV, Other
__index_level_0__: 535,843
1301.3860
Maximum Entropy and the Glasses You Are Looking Through
We give an interpretation of the Maximum Entropy (MaxEnt) Principle in game-theoretic terms. Based on this interpretation, we make a formal distinction between different ways of \emph{applying} Maximum Entropy distributions. MaxEnt has frequently been criticized on the grounds that it leads to highly representation dependent results. Our distinction allows us to avoid this problem in many cases.
Categories: cs.AI
__index_level_0__: 21,172
1910.09858
Fixed Pattern Noise Reduction for Infrared Images Based on Cascade Residual Attention CNN
Existing fixed pattern noise reduction (FPNR) methods are easily affected by the motion state of the scene and the working condition of the image sensor, which leads to over-smoothing effects, ghosting artifacts, and a slow convergence rate. To address these issues, we design an innovative cascade convolutional neural network (CNN) model with residual skip connections to realize single-frame blind FPNR operation without any parameter tuning. Moreover, a coarse-fine convolution (CF-Conv) unit is introduced to extract complementary features at various scales and fuse them to capture more spatial information. Inspired by the success of the visual attention mechanism, we further propose a particular spatial-channel noise attention unit (SCNAU) to separate the scene details from fixed pattern noise more thoroughly and recover the real scene more accurately. Experimental results on test data demonstrate that the proposed cascade CNN-FPNR method outperforms the existing FPNR methods in both visual effect and quantitative assessment.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
150,322
2008.07707
RTFN: Robust Temporal Feature Network
Time series analysis plays a vital role in various applications, for instance, healthcare, weather prediction, disaster forecast, etc. However, to obtain sufficient shapelets by a feature network is still challenging. To this end, we propose a novel robust temporal feature network (RTFN) that contains temporal feature networks and attentional LSTM networks. The temporal feature networks are built to extract basic features from input data while the attentional LSTM networks are devised to capture complicated shapelets and relationships to enrich features. In experiments, we embed RTFN into supervised structure as a feature extraction network and into unsupervised clustering as an encoder, respectively. The results show that the RTFN-based supervised structure is a winner of 40 out of 85 datasets and the RTFN-based unsupervised clustering performs the best on 4 out of 11 datasets in the UCR2018 archive.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
192,192
1806.09533
Using NLP on news headlines to predict index trends
This paper attempts to provide a state of the art in trend prediction using news headlines. We present the research done on predicting DJIA trends using Natural Language Processing. We will explain the different algorithms we have used as well as the various embedding techniques attempted. We rely on statistical and deep learning models in order to extract information from the corpora.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
101,371
2408.04369
Analyzing Consumer Reviews for Understanding Drivers of Hotels Ratings: An Indian Perspective
In the internet era, almost every business entity is trying to have its digital footprint in digital media and other social media platforms. For these entities, word of mouse is also very important. Particularly, this is quite crucial for the hospitality sector dealing with hotels, restaurants, etc. Consumers do read other consumers' reviews before making final decisions. This is where it becomes very important to understand which aspects affect the minds of the consumers most while giving their ratings. The current study focuses on the consumer reviews of Indian hotels to extract aspects important for final ratings. The study involves gathering data using web scraping methods, analyzing the texts using Latent Dirichlet Allocation for topic extraction and sentiment analysis for aspect-specific sentiment mapping. Finally, it incorporates Random Forest to understand the importance of the aspects in predicting the final rating of a user.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
479,361
2309.04422
Video Task Decathlon: Unifying Image and Video Tasks in Autonomous Driving
Performing multiple heterogeneous visual tasks in dynamic scenes is a hallmark of human perception capability. Despite remarkable progress in image and video recognition via representation learning, current research still focuses on designing specialized networks for singular, homogeneous, or simple combinations of tasks. We instead explore the construction of a unified model for major image and video recognition tasks in autonomous driving with diverse input and output structures. To enable such an investigation, we design a new challenge, Video Task Decathlon (VTD), which includes ten representative image and video tasks spanning classification, segmentation, localization, and association of objects and pixels. On VTD, we develop our unified network, VTDNet, that uses a single structure and a single set of weights for all ten tasks. VTDNet groups similar tasks and employs task interaction stages to exchange information within and between task groups. Given the impracticality of labeling all tasks on all frames, and the performance degradation associated with joint training of many tasks, we design a Curriculum training, Pseudo-labeling, and Fine-tuning (CPF) scheme to successfully train VTDNet on all tasks and mitigate performance loss. Armed with CPF, VTDNet significantly outperforms its single-task counterparts on most tasks with only 20% of the overall computation. VTD is a promising new direction for exploring the unification of perception tasks in autonomous driving.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
390,719
2411.15385
Gradient dynamics for low-rank fine-tuning beyond kernels
LoRA has emerged as one of the de facto methods for fine-tuning foundation models with low computational cost and memory footprint. The idea is to only train a low-rank perturbation to the weights of a pre-trained model, given supervised data for a downstream task. Despite its empirical success, from a mathematical perspective it remains poorly understood what learning mechanisms ensure that gradient descent converges to useful low-rank perturbations. In this work we study low-rank fine-tuning in a student-teacher setting. We are given the weights of a two-layer base model $f$, as well as i.i.d. samples $(x,f^*(x))$ where $x$ is Gaussian and $f^*$ is the teacher model given by perturbing the weights of $f$ by a rank-1 matrix. This generalizes the setting of generalized linear model (GLM) regression where the weights of $f$ are zero. When the rank-1 perturbation is comparable in norm to the weight matrix of $f$, the training dynamics are nonlinear. Nevertheless, in this regime we prove under mild assumptions that a student model which is initialized at the base model and trained with online gradient descent will converge to the teacher in $dk^{O(1)}$ iterations, where $k$ is the number of neurons in $f$. Importantly, unlike in the GLM setting, the complexity does not depend on fine-grained properties of the activation's Hermite expansion. We also prove that in our setting, learning the teacher model "from scratch'' can require significantly more iterations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
510,585
2303.08600
MSeg3D: Multi-modal 3D Semantic Segmentation for Autonomous Driving
LiDAR and camera are two modalities available for 3D semantic segmentation in autonomous driving. The popular LiDAR-only methods severely suffer from inferior segmentation on small and distant objects due to insufficient laser points, while the robust multi-modal solution is under-explored, where we investigate three crucial inherent difficulties: modality heterogeneity, limited sensor field of view intersection, and multi-modal data augmentation. We propose a multi-modal 3D semantic segmentation model (MSeg3D) with joint intra-modal feature extraction and inter-modal feature fusion to mitigate the modality heterogeneity. The multi-modal fusion in MSeg3D consists of geometry-based feature fusion GF-Phase, cross-modal feature completion, and semantic-based feature fusion SF-Phase on all visible points. The multi-modal data augmentation is reinvigorated by applying asymmetric transformations on LiDAR point cloud and multi-camera images individually, which benefits the model training with diversified augmentation transformations. MSeg3D achieves state-of-the-art results on nuScenes, Waymo, and SemanticKITTI datasets. Under the malfunctioning multi-camera input and the multi-frame point clouds input, MSeg3D still shows robustness and improves the LiDAR-only baseline. Our code is publicly available at \url{https://github.com/jialeli1/lidarseg3d}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
351,707
2305.06289
Learning Video-Conditioned Policies for Unseen Manipulation Tasks
The ability to specify robot commands by a non-expert user is critical for building generalist agents capable of solving a large variety of tasks. One convenient way to specify the intended robot goal is by a video of a person demonstrating the target task. While prior work typically aims to imitate human demonstrations performed in robot environments, here we focus on a more realistic and challenging setup with demonstrations recorded in natural and diverse human environments. We propose Video-conditioned Policy learning (ViP), a data-driven approach that maps human demonstrations of previously unseen tasks to robot manipulation skills. To this end, we learn our policy to generate appropriate actions given current scene observations and a video of the target task. To encourage generalization to new tasks, we avoid particular tasks during training and learn our policy from unlabelled robot trajectories and corresponding robot videos. Both robot and human videos in our framework are represented by video embeddings pre-trained for human action recognition. At test time we first translate human videos to robot videos in the common video embedding space, and then use resulting embeddings to condition our policies. Notably, our approach enables robot control by human demonstrations in a zero-shot manner, i.e., without using robot trajectories paired with human instructions during training. We validate our approach on a set of challenging multi-task robot manipulation environments and outperform state of the art. Our method also demonstrates excellent performance in a new challenging zero-shot setup where no paired data is used during training.
false
false
false
false
false
false
true
true
false
false
false
true
false
false
false
false
false
false
363,476
2205.10805
Deep Learning-Based Synchronization for Uplink NB-IoT
We propose a neural network (NN)-based algorithm for device detection and time of arrival (ToA) and carrier frequency offset (CFO) estimation for the narrowband physical random-access channel (NPRACH) of narrowband internet of things (NB-IoT). The introduced NN architecture leverages residual convolutional networks as well as knowledge of the preamble structure of the 5G New Radio (5G NR) specifications. Benchmarking on a 3rd Generation Partnership Project (3GPP) urban microcell (UMi) channel model with random drops of users against a state-of-the-art baseline shows that the proposed method enables up to 8 dB gains in false negative rate (FNR) as well as significant gains in false positive rate (FPR) and ToA and CFO estimation accuracy. Moreover, our simulations indicate that the proposed algorithm enables gains over a wide range of channel conditions, CFOs, and transmission probabilities. The introduced synchronization method operates at the base station (BS) and, therefore, introduces no additional complexity on the user devices. It could lead to an extension of battery lifetime by reducing the preamble length or the transmit power. Our code is available at: https://github.com/NVlabs/nprach_synch/.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
297,881
2405.03667
Fault Detection and Monitoring using a Data-Driven Information-Based Strategy: Method, Theory, and Application
The ability to detect when a system undergoes an incipient fault is of paramount importance in preventing a critical failure. Classic methods for fault detection (including model-based and data-driven approaches) rely on thresholding error statistics or simple input-residual dependencies but face difficulties with non-linear or non-Gaussian systems. Behavioral methods (e.g., those relying on digital twins) address these difficulties but still face challenges when faulty data is scarce, decision guarantees are required, or working with already-deployed models is required. In this work, we propose an information-driven fault detection method based on a novel concept drift detector, addressing these challenges. The method is tailored to identifying drifts in input-output relationships of additive noise models (i.e., model drifts) and is based on a distribution-free mutual information (MI) estimator. Our scheme does not require prior faulty examples and can be applied distribution-free over a large class of system models. Our core contributions are twofold. First, we demonstrate the connection between fault detection, model drift detection, and testing independence between two random variables. Second, we prove several theoretical properties of the proposed MI-based fault detection scheme: (i) strong consistency, (ii) exponentially fast detection of the non-faulty case, and (iii) control of both significance levels and power of the test. To conclude, we validate our theory with synthetic data and the benchmark dataset N-CMAPSS of aircraft turbofan engines. These empirical results support the usefulness of our methodology in many practical and realistic settings, and the theoretical results show performance guarantees that other methods cannot offer.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
452,268
2103.05939
A Review and Refinement of Surprise Adequacy
Surprise Adequacy (SA) is one of the emerging and most promising adequacy criteria for Deep Learning (DL) testing. As an adequacy criterion, it has been used to assess the strength of DL test suites. In addition, it has also been used to find inputs to a Deep Neural Network (DNN) which were not sufficiently represented in the training data, or to select samples for DNN retraining. However, computation of the SA metric for a test suite can be prohibitively expensive, as it involves a quadratic number of distance calculations. Hence, we developed and released a performance-optimized, but functionally equivalent, implementation of SA, reducing the evaluation time by up to 97%. We also propose refined variants of the SA computation algorithm, aiming to further increase the evaluation speed. We then performed an empirical study on MNIST, focused on the out-of-distribution detection capabilities of SA, which allowed us to reproduce parts of the results presented when SA was first released. The experiments show that our refined variants are substantially faster than plain SA, while producing comparable outcomes. Our experimental results also exposed an overlooked issue of SA: it can be highly sensitive to the non-determinism associated with the DNN training procedure.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
224,137
2212.07172
Quotations, Coreference Resolution, and Sentiment Annotations in Croatian News Articles: An Exploratory Study
This paper presents a corpus annotated for the task of direct-speech extraction in Croatian. The paper focuses on the annotation of the quotation, co-reference resolution, and sentiment annotation in SETimes news corpus in Croatian and on the analysis of its language-specific differences compared to English. From this, a list of the phenomena that require special attention when performing these annotations is derived. The generated corpus with quotation features annotations can be used for multiple tasks in the field of Natural Language Processing.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
336,326
2005.09336
A systematic comparison of grapheme-based vs. phoneme-based label units for encoder-decoder-attention models
Following the rationale of end-to-end modeling, CTC, RNN-T or encoder-decoder-attention models for automatic speech recognition (ASR) use graphemes or grapheme-based subword units based on e.g. byte-pair encoding (BPE). The mapping from pronunciation to spelling is learned completely from data. In contrast to this, classical approaches to ASR employ secondary knowledge sources in the form of phoneme lists to define phonetic output labels and pronunciation lexica. In this work, we do a systematic comparison between grapheme- and phoneme-based output labels for an encoder-decoder-attention ASR model. We investigate the use of single phonemes as well as BPE-based phoneme groups as output labels of our model. To preserve a simplified and efficient decoder design, we also extend the phoneme set by auxiliary units to be able to distinguish homophones. Experiments performed on the Switchboard 300h and LibriSpeech benchmarks show that phoneme-based modeling is competitive to grapheme-based encoder-decoder-attention modeling.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
177,900
0908.1597
A quantum diffusion network
Wong's diffusion network is a stochastic, zero-input Hopfield network with a Gibbs stationary distribution over a bounded, connected continuum. Previously, logarithmic thermal annealing was demonstrated for the diffusion network and digital versions of it were studied and applied to imaging. Recently, "quantum" annealed Markov chains have garnered significant attention because of their improved performance over "pure" thermal annealing. In this note, a joint quantum and thermal version of Wong's diffusion network is described and its convergence properties are studied. Different choices for "auxiliary" functions are discussed, including those of the kinetic type previously associated with quantum annealing.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
4,263
1312.6808
Socially-Aware Venue Recommendation for Conference Participants
Current research environments are witnessing a large number of presentations occurring in different sessions at academic conferences. This situation makes it difficult for researchers (especially juniors) to attend the right presentation session(s) for effective collaboration. In this paper, we propose an innovative venue recommendation algorithm to enhance smart conference participation. Our proposed algorithm, Social Aware Recommendation of Venues and Environments (SARVE), computes the Pearson Correlation and social characteristic information of conference participants. SARVE further incorporates the current context of both the smart conference community and participants in order to model a recommendation process using distributed community detection. Through the integration of the above computations and techniques, we are able to recommend presentation sessions of active participant presenters that may be of high interest to a particular participant. We evaluate SARVE using a real-world dataset. Our experimental results demonstrate that SARVE outperforms other state-of-the-art methods.
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
29,402
2407.16923
Handling Device Heterogeneity for Deep Learning-based Localization
Deep learning-based fingerprinting is one of the current promising technologies for outdoor localization in cellular networks. However, deploying such localization systems for heterogeneous phones affects their accuracy as the cellular received signal strength (RSS) readings vary for different types of phones. In this paper, we introduce a number of techniques for addressing the phones heterogeneity problem in the deep-learning based localization systems. The basic idea is either to approximate a function that maps the cellular RSS measurements between different devices or to transfer the knowledge across them. Evaluation of the proposed techniques using different Android phones on four independent testbeds shows that our techniques can improve the localization accuracy by more than 220% for the four testbeds as compared to the state-of-the-art systems. This highlights the promise of the proposed device heterogeneity handling techniques for enabling a wide deployment of deep learning-based localization systems over different devices.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
475,781
2410.13389
Dynamic Input Mapping Inversion for Algebraic Loop-Free Control in Hydraulic Actuators
The application of nonlinear control schemes to electro-hydraulic actuators often requires several alterations in the design of the controllers during their implementation. This is to overcome the challenges that frequently arise from the inherent complexity of such control algorithms owing to model nonlinearities. Moreover, advanced control solutions for this type of systems often introduce input algebraic loops and chatter, which considerably degrade the tracking performance. This study presents a nonlinear control architecture for hydraulic actuators that comprises low-complexity modules, based on well-established designs that facilitate robust high performance in tracking without introducing the aforementioned limitations. Specifically, the proposed solution consists of two variants of a position controller for the hydraulic cylinder and a dynamic input-mapping inversion module to avoid algebraic loops in the control input. The stability of the closed-loop system is analysed using arguments from Lyapunov theory for cascaded non-autonomous nonlinear systems. The effectiveness of the proposed solution is evaluated on a high-fidelity simulator of a wind turbine pitch system. Appropriate quantitative metrics are finally defined to evaluate the closed-loop system performance in comparison to state-of-the-art nonlinear design.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
499,512
1601.06108
Decision Aids for Adversarial Planning in Military Operations: Algorithms, Tools, and Turing-test-like Experimental Validation
Use of intelligent decision aids can help alleviate the challenges of planning complex operations. We describe integrated algorithms, and a tool capable of translating a high-level concept for a tactical military operation into a fully detailed, actionable plan, producing automatically (or with human guidance) plans with realistic degree of detail and of human-like quality. Tight interleaving of several algorithms -- planning, adversary estimates, scheduling, routing, attrition and consumption estimates -- comprise the computational approach of this tool. Although originally developed for Army large-unit operations, the technology is generic and also applies to a number of other domains, particularly in critical situations requiring detailed planning within a constrained period of time. In this paper, we focus particularly on the engineering tradeoffs in the design of the tool. In an experimental evaluation, reminiscent of the Turing test, the tool's performance compared favorably with human planners.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
51,226
1503.01250
A new method on deterministic construction of the measurement matrix in compressed sensing
Construction of the measurement matrix $A$ is a central problem in compressed sensing. Although using random matrices has proven optimal and successful in both theory and applications, a deterministic construction of the measurement matrix is still very important and interesting. In fact, it is still an open problem proposed by T. Tao. In this paper, we shall provide a new deterministic construction method and prove it is optimal with regard to the mutual incoherence.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
40,809
2501.11406
Efficient Reduction of Interconnected Subsystem Models using Abstracted Environments
We present two frameworks for structure-preserving model order reduction of interconnected subsystems, improving tractability of the reduction methods while ensuring stability and accuracy bounds of the reduced interconnected model. Instead of reducing each subsystem independently, we take a low-order abstraction of its environment into account to better capture the dynamics relevant to the external input-output behaviour of the interconnected system, thereby increasing accuracy of the reduced interconnected model. This approach significantly reduces the computational costs of reduction by abstracting instead of fully retaining the environment. The two frameworks differ in how they generate these abstracted environments: one abstracts the environment as a whole, whereas the other abstracts each individual subsystem. By relating low-level errors introduced by reduction and abstraction to the resulting high-level error on the interconnected system, we are able to translate high-level accuracy requirements (on the reduced interconnected system) to low-level specifications (on abstraction and reduction errors) using techniques from robust performance analysis. By adhering to these low-level specifications, restricting the introduced low-level errors, both frameworks automatically guarantee the accuracy and stability of the reduced interconnected system. We demonstrate the effectiveness of both frameworks by applying them to a structural dynamics model of a two-stroke wafer stage, achieving improved accuracy and/or greater reduction compared to an existing method from literature.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
525,910
2104.01854
Integrating 2D and 3D Digital Plant Information Towards Automatic Generation of Digital Twins
Ongoing standardization in Industry 4.0 supports tool vendor neutral representations of Piping and Instrumentation diagrams as well as 3D pipe routing. However, a complete digital plant model requires combining these two representations. 3D pipe routing information is essential for building any accurate first-principles process simulation model. Piping and instrumentation diagrams are the primary source for control loops. In order to automatically integrate these information sources to a unified digital plant model, it is necessary to develop algorithms for identifying corresponding elements such as tanks and pumps from piping and instrumentation diagrams and 3D CAD models. One approach is to raise these two information sources to a common level of abstraction and to match them at this level of abstraction. Graph matching is a potential technique for this purpose. This article focuses on automatic generation of the graphs as a prerequisite to graph matching. Algorithms for this purpose are proposed and validated with a case study. The paper concludes with a discussion of further research needed to reprocess the generated graphs in order to enable effective matching.
false
false
false
false
true
false
false
false
false
true
true
true
false
false
false
false
false
true
228,504
1912.07959
Multi-focus Image Fusion Based on Similarity Characteristics
A novel multi-focus image fusion algorithm performed in spatial domain based on similarity characteristics is proposed incorporating with region segmentation. In this paper, a new similarity measure is developed based on the structural similarity (SSIM) index, which is more suitable for multi-focus image segmentation. Firstly, the SSNSIM map is calculated between two input images. Then we segment the SSNSIM map using watershed method, and merge the small homogeneous regions with fuzzy c-means clustering algorithm (FCM). For three source images, a joint region segmentation method based on segmentation of two images is used to obtain the final segmentation result. Finally, the corresponding segmented regions of the source images are fused according to their average gradient. The performance of the image fusion method is evaluated by several criteria including spatial frequency, average gradient, entropy, edge retention etc. The evaluation results indicate that the proposed method is effective and has good visual perception.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
157,729
2007.11086
Converse Barrier Functions via Lyapunov Functions
We prove a robust converse barrier function theorem via the converse Lyapunov theory. While the use of a Lyapunov function as a barrier function is straightforward, the existence of a converse Lyapunov function as a barrier function for a given safety set is not. We establish this link by a robustness argument. We show that the closure of the forward reachable set of a robustly safe set must be robustly asymptotically stable under mild technical assumptions. As a result, all robustly safe dynamical systems must admit a robust barrier function in the form of a Lyapunov function for set stability. We present the results in both continuous-time and discrete-time settings and remark on connections with various barrier function conditions.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
188,456
2402.13916
Bias correction of wind power forecasts with SCADA data and continuous learning
Wind energy plays a critical role in the transition towards renewable energy sources. However, the uncertainty and variability of wind can impede its full potential and the necessary growth of wind power capacity. To mitigate these challenges, wind power forecasting methods are employed for applications in power management, energy trading, or maintenance scheduling. In this work, we present, evaluate, and compare four machine learning-based wind power forecasting models. Our models correct and improve 48-hour forecasts extracted from a numerical weather prediction (NWP) model. The models are evaluated on datasets from a wind park comprising 65 wind turbines. The best improvement in forecasting error and mean bias was achieved by a convolutional neural network, reducing the average NRMSE down to 22%, coupled with a significant reduction in mean bias, compared to a NRMSE of 35% from the strongly biased baseline model using uncorrected NWP forecasts. Our findings further indicate that changes to neural network architectures play a minor role in affecting the forecasting performance, and that future research should rather investigate changes in the model pipeline. Moreover, we introduce a continuous learning strategy, which is shown to achieve the highest forecasting performance improvements when new data is made available.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
431,459
2409.01628
CTG-KrEW: Generating Synthetic Structured Contextually Correlated Content by Conditional Tabular GAN with K-Means Clustering and Efficient Word Embedding
Conditional Tabular Generative Adversarial Networks (CTGAN) and their various derivatives are attractive for their ability to efficiently and flexibly create synthetic tabular data, showcasing strong performance and adaptability. However, there are certain critical limitations to such models. The first is their inability to preserve the semantic integrity of contextually correlated words or phrases. For instance, the skillset in freelancer profiles is one such attribute where individual skills are semantically interconnected and indicative of specific domain interests or qualifications. The second challenge of traditional approaches is that, when applied to generate contextually correlated tabular content, besides generating semantically shallow content, they consume huge memory resources and CPU time during the training stage. To address these problems, we introduce a novel framework, CTGKrEW (Conditional Tabular GAN with KMeans Clustering and Word Embedding), which is adept at generating realistic synthetic tabular data where attributes are collections of semantically and contextually coherent words. CTGKrEW is trained and evaluated using a dataset from Upwork, a real-world freelancing platform. Comprehensive experiments were conducted to analyze the variability, contextual similarity, frequency distribution, and associativity of the generated data, along with testing the framework's system feasibility. CTGKrEW also takes around 99% less CPU time and has a 33% smaller memory footprint than the conventional approach. Furthermore, we developed KrEW, a web application to facilitate the generation of realistic data containing skill-related information. This application, available at https://riyasamanta.github.io/krew.html, is freely accessible to both the general public and the research community.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
485,417
2210.06720
LIME: Weakly-Supervised Text Classification Without Seeds
In weakly-supervised text classification, only label names act as sources of supervision. Predominant approaches to weakly-supervised text classification utilize a two-phase framework, where test samples are first assigned pseudo-labels and are then used to train a neural text classifier. In most previous work, the pseudo-labeling step is dependent on obtaining seed words that best capture the relevance of each class label. We present LIME, a framework for weakly-supervised text classification that entirely replaces the brittle seed-word generation process with entailment-based pseudo-classification. We find that combining weakly-supervised classification and textual entailment mitigates shortcomings of both, resulting in a more streamlined and effective classification pipeline. With just an off-the-shelf textual entailment model, LIME outperforms recent baselines in weakly-supervised text classification and achieves state-of-the-art in 4 benchmarks. We open source our code at https://github.com/seongminp/LIME.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
323,412
2412.13395
Enhancing Talk Moves Analysis in Mathematics Tutoring through Classroom Teaching Discourse
Human tutoring interventions play a crucial role in supporting student learning, improving academic performance, and promoting personal growth. This paper focuses on analyzing mathematics tutoring discourse using talk moves - a framework of dialogue acts grounded in Accountable Talk theory. However, scaling the collection, annotation, and analysis of extensive tutoring dialogues to develop machine learning models is a challenging and resource-intensive task. To address this, we present SAGA22, a compact dataset, and explore various modeling strategies, including dialogue context, speaker information, pretraining datasets, and further fine-tuning. By leveraging existing datasets and models designed for classroom teaching, our results demonstrate that supplementary pretraining on classroom data enhances model performance in tutoring settings, particularly when incorporating longer context and speaker information. Additionally, we conduct extensive ablation studies to underscore the challenges in talk move modeling.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
518,271
2209.08807
A Deep Learning Approach for Parallel Imaging and Compressed Sensing MRI Reconstruction
Parallel imaging accelerates MRI data acquisition by acquiring additional sensitivity information with an array of receiver coils, resulting in fewer phase encoding steps. Because of fewer data requirements than parallel imaging, compressed sensing magnetic resonance imaging (CS-MRI) has gained popularity in the field of medical imaging. Parallel imaging and compressed sensing (CS) both reduce the amount of data captured in the k-space, which speeds up traditional MRI acquisition. As acquisition time is inversely proportional to sample count, forming an image from reduced k-space samples results in faster acquisition but with aliasing artifacts. For de-aliasing the reconstructed image, this paper proposes a novel Generative Adversarial Network (GAN) called RECGAN-GR that is supervised with multi-modal losses. In comparison to existing GAN networks, our proposed method introduces a novel generator network, RemU-Net, which is integrated with dual-domain loss functions such as weighted magnitude and phase loss functions, as well as parallel imaging-based loss, GRAPPA consistency loss. As refinement learning, a k-space correction block is proposed to make the GAN network self-resistant to generating unnecessary data, which speeds up the reconstruction process. Comprehensive results show that the proposed RECGAN-GR not only improves the PSNR by 4 dB over GAN-based methods but also by 2 dB over conventional state-of-the-art CNN methods available in the literature for single-coil data. The proposed work significantly improves image quality for low-retained data, resulting in five to ten times faster acquisition.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
318,277
2302.02881
Enhancing Human-Robot Collaboration Transportation through Obstacle-Aware Vibrotactile Feedback
Transporting large and heavy objects can benefit from Human-Robot Collaboration (HRC), increasing the contribution of robots to our daily tasks and reducing the risk of injuries to the human operator. This approach usually posits the human collaborator as the leader, while the robot has the follower role. Hence, it is essential for the leader to be aware of the environmental situation. However, when transporting a large object, the operator's situational awareness can be compromised as the object may occlude different parts of the environment. This paper proposes a novel haptic-based environmental awareness module for a collaborative transportation framework that informs the human operator about surrounding obstacles. The robot uses two LIDARs to detect the obstacles in the surroundings. The warning module alerts the operator through a haptic belt with four vibrotactile devices that provide feedback about the location and proximity of the obstacles. By enhancing the operator's awareness of the surroundings, the proposed module improves the safety of the human-robot team in co-carrying scenarios by preventing collisions. Experiments with two non-expert subjects in two different situations are conducted. The results show that the human partner can successfully lead the co-transportation system in an unknown environment with hidden obstacles thanks to the haptic feedback.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
344,138
2212.08235
A Simple Decentralized Cross-Entropy Method
Cross-Entropy Method (CEM) is commonly used for planning in model-based reinforcement learning (MBRL) where a centralized approach is typically utilized to update the sampling distribution based on only the top-$k$ operation's results on samples. In this paper, we show that such a centralized approach makes CEM vulnerable to local optima, thus impairing its sample efficiency. To tackle this issue, we propose Decentralized CEM (DecentCEM), a simple but effective improvement over classical CEM, by using an ensemble of CEM instances running independently from one another, each performing a local improvement of its own sampling distribution. We provide both theoretical and empirical analysis to demonstrate the effectiveness of this simple decentralized approach. We empirically show that, compared to the classical centralized approach using either a single Gaussian distribution or even a mixture of Gaussians, our DecentCEM finds the global optimum much more consistently, thus improving the sample efficiency. Furthermore, we plug our DecentCEM into the planning problem of MBRL, and evaluate our approach in several continuous control environments, with comparison to the state-of-the-art CEM-based MBRL approaches (PETS and POPLIN). Results show a sample efficiency improvement from simply replacing the classical CEM module with our DecentCEM module, while only sacrificing a reasonable amount of computational cost. Lastly, we conduct ablation studies for more in-depth analysis. Code is available at https://github.com/vincentzhang/decentCEM
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
336,676
2308.15863
Inductive Learning of Declarative Domain-Specific Heuristics for ASP
Domain-specific heuristics are a crucial technique for the efficient solving of problems that are large or computationally hard. Answer Set Programming (ASP) systems support declarative specifications of domain-specific heuristics to improve solving performance. However, such heuristics must be invented manually so far. Inventing domain-specific heuristics for answer-set programs requires expertise with the domain under consideration and familiarity with ASP syntax, semantics, and solving technology. The process of inventing useful heuristics would highly profit from automatic support. This paper presents a novel approach to the automatic learning of such heuristics. We use Inductive Logic Programming (ILP) to learn declarative domain-specific heuristics from examples stemming from (near-)optimal answer sets of small but representative problem instances. Our experimental results indicate that the learned heuristics can improve solving performance and solution quality when solving larger, harder instances of the same problem.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
388,822
2109.10450
Towards cyber-physical systems robust to communication delays: A differential game approach
Collaboration between interconnected cyber-physical systems is becoming increasingly pervasive. Time-delays in communication channels between such systems are known to induce catastrophic failure modes, like high frequency oscillations in robotic manipulators in bilateral teleoperation or string instability in platoons of autonomous vehicles. This paper considers nonlinear time-delay systems representing coupled robotic agents, and proposes controllers that are robust to time-varying communication delays. We introduce approximations that allow the delays to be considered as implicit control inputs themselves, and formulate the problem as a zero-sum differential game between the stabilizing controllers and the delays acting adversarially. The ensuing optimal control law is finally compared to known results from Lyapunov-Krasovskii based approaches via numerical experiments.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
256,609
2011.11890
Cross-Camera Convolutional Color Constancy
We present "Cross-Camera Convolutional Color Constancy" (C5), a learning-based method, trained on images from multiple cameras, that accurately estimates a scene's illuminant color from raw images captured by a new camera previously unseen during training. C5 is a hypernetwork-like extension of the convolutional color constancy (CCC) approach: C5 learns to generate the weights of a CCC model that is then evaluated on the input image, with the CCC weights dynamically adapted to different input content. Unlike prior cross-camera color constancy models, which are usually designed to be agnostic to the spectral properties of test-set images from unobserved cameras, C5 approaches this problem through the lens of transductive inference: additional unlabeled images are provided as input to the model at test time, which allows the model to calibrate itself to the spectral properties of the test-set camera during inference. C5 achieves state-of-the-art accuracy for cross-camera color constancy on several datasets, is fast to evaluate (~7 and ~90 ms per image on a GPU or CPU, respectively), and requires little memory (~2 MB), and thus is a practical solution to the problem of calibration-free automatic white balance for mobile photography.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
207,980
2106.00161
Integrative Use of Computer Vision and Unmanned Aircraft Technologies in Public Inspection: Foreign Object Debris Image Collection
Unmanned Aircraft Systems (UAS) have become an important resource for public service providers and smart cities. The purpose of this study is to expand this research area by integrating computer vision and UAS technology to automate public inspection. As an initial case study for this work, a dataset of common foreign object debris (FOD) is developed to assess the potential of light-weight automated detection. This paper presents the rationale and creation of this dataset. Future iterations of our work will include further technical details analyzing experimental implementation. At a local airport, UAS and portable cameras are used to collect the data contained in the initial version of this dataset. After collecting these videos of FOD, they were split into individual frames and stored as several thousand images. These frames are then annotated following standard computer vision format and stored in a folder-structure that reflects our creation method. The dataset annotations are validated using a custom tool that could be abstracted to fit future applications. Initial detection models were successfully created using the famous You Only Look Once algorithm, which indicates the practicality of the proposed data. Finally, several potential scenarios that could utilize either this dataset or similar methods for other public service are presented.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
238,009
1812.07079
Rethinking Epistemic Logic with Belief Bases
We introduce a new semantics for a logic of explicit and implicit beliefs based on the concept of multi-agent belief base. Differently from existing Kripke-style semantics for epistemic logic in which the notions of possible world and doxastic/epistemic alternative are primitive, in our semantics they are non-primitive but are defined from the concept of belief base. We provide a complete axiomatization and prove decidability for our logic via a finite model argument. We also provide a polynomial embedding of our logic into Fagin & Halpern's logic of general awareness and establish a complexity result for our logic via the embedding.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
116,739
2208.09793
FastCPH: Efficient Survival Analysis for Neural Networks
The Cox proportional hazards model is a canonical method in survival analysis for prediction of the life expectancy of a patient given clinical or genetic covariates -- it is a linear model in its original form. In recent years, several methods have been proposed to generalize the Cox model to neural networks, but none of these are both numerically correct and computationally efficient. We propose FastCPH, a new method that runs in linear time and supports both the standard Breslow and Efron methods for tied events. We also demonstrate the performance of FastCPH combined with LassoNet, a neural network that provides interpretability through feature sparsity, on survival datasets. The final procedure is efficient, selects useful covariates and outperforms existing CoxPH approaches.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
313,837
1910.04388
First Order Ambisonics Domain Spatial Augmentation for DNN-based Direction of Arrival Estimation
In this paper, we propose a novel data augmentation method for training neural networks for Direction of Arrival (DOA) estimation. This method focuses on expanding the representation of the DOA subspace of a dataset. Given some input data, it applies a transformation to it in order to change its DOA information and simulate new, potentially unseen DOA information. Such a transformation, in general, is a combination of a rotation and a reflection. It is possible to apply such a transformation due to a well-known property of First Order Ambisonics (FOA). The same transformation is applied to the labels as well, in order to maintain consistency between input data and target labels. Three methods with different levels of generality are proposed for applying this augmentation principle. Experiments are conducted on two different DOA networks. Results of both experiments demonstrate the effectiveness of the novel augmentation strategy by improving the DOA error by around 40%.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
148,757
1304.2743
Comparisons of Reasoning Mechanisms for Computer Vision
An evidential reasoning mechanism based on the Dempster-Shafer theory of evidence is introduced. Its performance in real-world image analysis is compared with other mechanisms based on the Bayesian formalism and a simple weight combination method.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
23,752
2210.05582
Digital Twin-Based Multiple Access Optimization and Monitoring via Model-Driven Bayesian Learning
Commonly adopted in the manufacturing and aerospace sectors, digital twin (DT) platforms are increasingly seen as a promising paradigm to control and monitor software-based, "open", communication systems, which play the role of the physical twin (PT). In the general framework presented in this work, the DT builds a Bayesian model of the communication system, which is leveraged to enable core DT functionalities such as control via multi-agent reinforcement learning (MARL) and monitoring of the PT for anomaly detection. We specifically investigate the application of the proposed framework to a simple case-study system encompassing multiple sensing devices that report to a common receiver. The Bayesian model trained at the DT has the key advantage of capturing epistemic uncertainty regarding the communication system, e.g., regarding current traffic conditions, which arise from limited PT-to-DT data transfer. Experimental results validate the effectiveness of the proposed Bayesian framework as compared to standard frequentist model-based solutions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
322,918
2410.05071
Function Gradient Approximation with Random Shallow ReLU Networks with Control Applications
Neural networks are widely used to approximate unknown functions in control. A common neural network architecture uses a single hidden layer (i.e. a shallow network), in which the input parameters are fixed in advance and only the output parameters are trained. The typical formal analysis asserts that if output parameters exist to approximate the unknown function with sufficient accuracy, then desired control performance can be achieved. A long-standing theoretical gap was that no conditions existed to guarantee that, for the fixed input parameters, required accuracy could be obtained by training the output parameters. Our recent work has partially closed this gap by demonstrating that if input parameters are chosen randomly, then for any sufficiently smooth function, with high-probability there are output parameters resulting in $O((1/m)^{1/2})$ approximation errors, where $m$ is the number of neurons. However, some applications, notably continuous-time value function approximation, require that the network approximates both the unknown function and its gradient with sufficient accuracy. In this paper, we show that randomly generated input parameters and trained output parameters result in gradient errors of $O((\log(m)/m)^{1/2})$, and additionally, improve the constants from our prior work. We show how to apply the result to policy evaluation problems.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
495,553
2501.13887
What Does an Audio Deepfake Detector Focus on? A Study in the Time Domain
Adding explanations to audio deepfake detection (ADD) models will boost their real-world application by providing insight on the decision making process. In this paper, we propose a relevancy-based explainable AI (XAI) method to analyze the predictions of transformer-based ADD models. We compare against standard Grad-CAM and SHAP-based methods, using quantitative faithfulness metrics as well as a partial spoof test, to comprehensively analyze the relative importance of different temporal regions in an audio. We consider large datasets, unlike previous works where only limited utterances are studied, and find that the XAI methods differ in their explanations. The proposed relevancy-based XAI method performs the best overall on a variety of metrics. Further investigation on the relative importance of speech/non-speech, phonetic content, and voice onsets/offsets suggest that the XAI results obtained from analyzing limited utterances don't necessarily hold when evaluated on large datasets.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
526,861
1310.4977
Learning Tensors in Reproducing Kernel Hilbert Spaces with Multilinear Spectral Penalties
We present a general framework to learn functions in tensor product reproducing kernel Hilbert spaces (TP-RKHSs). The methodology is based on a novel representer theorem suitable for existing as well as new spectral penalties for tensors. When the functions in the TP-RKHS are defined on the Cartesian product of finite discrete sets, in particular, our main problem formulation admits as a special case existing tensor completion problems. Other special cases include transfer learning with multimodal side information and multilinear multitask learning. For the latter case, our kernel-based view is instrumental to derive nonlinear extensions of existing model classes. We give a novel algorithm and show in experiments the usefulness of the proposed extensions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
27,856
1409.3021
Semantic web service discovery approaches: overview and limitations
The semantic Web service discovery has been given massive attention within the last few years. With the increasing number of Web services available on the web, looking for a particular service has become very difficult, especially with the evolution of the clients' needs. In this context, various approaches to discover semantic Web services have been proposed. In this paper, we compare these approaches in order to assess their maturity and their adaptation to the current domain requirements. The outcome of this comparison will help us to identify the mechanisms that constitute the strengths of the existing approaches, and thereafter will serve as a guideline to determine the basis for a discovery approach more adapted to the current context of Web services.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
35,954
2101.11260
Modeling opinion leader's role in the diffusion of innovation
The diffusion of innovations is an important topic for consumer markets. Early research focused on how innovations spread at the level of the whole society. To get closer to real-world scenarios, agent-based models (ABMs) started focusing on individual-level agents. In our work we will translate an existing ABM that investigates the role of opinion leaders in the process of diffusion of innovations to a new, more expressive platform designed for agent-based modeling, GAMA. We do so to show that taking advantage of the new features of the chosen platform should be encouraged when building models in the field of social sciences in the future, because it can be beneficial for the explanatory power of simulation results.
false
false
false
true
true
false
false
false
false
false
false
false
false
true
false
false
false
false
217,215
2411.07595
Entropy Controllable Direct Preference Optimization
In the post-training of large language models (LLMs), Reinforcement Learning from Human Feedback (RLHF) is an effective approach to achieve generation aligned with human preferences. Direct Preference Optimization (DPO) allows for policy training with a simple binary cross-entropy loss without a reward model. The objective of DPO is regularized by reverse KL divergence that encourages mode-seeking fitting to the reference policy. Nonetheless, we indicate that minimizing reverse KL divergence could fail to capture a mode of the reference distribution, which may hurt the policy's performance. Based on this observation, we propose a simple modification to DPO, H-DPO, which allows for control over the entropy of the resulting policy, enhancing the distribution's sharpness and thereby enabling mode-seeking fitting more effectively. In our experiments, we show that H-DPO outperformed DPO across various tasks, demonstrating superior results in pass@$k$ evaluations for mathematical tasks. Moreover, H-DPO is simple to implement, requiring only minor modifications to the loss calculation of DPO, which makes it highly practical and promising for wide-ranging applications in the training of LLMs.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
507,606
1802.03889
Convergence Analysis of Alternating Projection Method for Nonconvex Sets
The alternating projection method has been used in a wide range of engineering applications since it is a gradient-free method (without requiring tuning of the step size) and usually converges quickly. In this paper, we formalize two properties of proper, lower semi-continuous and semi-algebraic sets: the three-point property for all possible iterates and the local contraction property that serves as the nonexpansiveness property of the projector, but only for iterates that are close enough to each other. Then, by exploiting the geometric properties of the objective function around its critical point, i.e. the Kurdyka-\L{ojasiewicz} (KL) property, we establish a new convergence analysis framework to show that if one set satisfies the three-point property and the other obeys the local contraction property, the iterates generated by the alternating projection method form a convergent sequence and converge to a critical point. We complete this study by providing a convergence rate which depends on the explicit expression of the KL exponent. As a byproduct, we use our new analysis framework to recover the linear convergence rate of the alternating projection method onto closed convex sets. To illustrate the power of our new framework, we provide a new convergence result for a class of concrete applications: the alternating projection method for designing structured tight frames that are widely used in sparse representation, compressed sensing and communication.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
90,093
1702.00298
Cascading Failures in Interdependent Systems: Impact of Degree Variability and Dependence
We study cascading failures in a system comprising interdependent networks/systems, in which nodes rely on other nodes both in the same system and in other systems to perform their function. The (inter-)dependence among nodes is modeled using a dependence graph, where the degree vector of a node determines the number of other nodes it can potentially cause to fail in each system through aforementioned dependency. In particular, we examine the impact of the variability and dependence properties of node degrees on the probability of cascading failures. We show that larger variability in node degrees hampers widespread failures in the system, starting with random failures. Similarly, positive correlations in node degrees make it harder to set off an epidemic of failures, thereby rendering the system more robust against random failures.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
67,642
2403.17525
Equipping Sketch Patches with Context-Aware Positional Encoding for Graphic Sketch Representation
The drawing order of a sketch records how it is created stroke-by-stroke by a human being. For graphic sketch representation learning, recent studies have injected sketch drawing orders into graph edge construction by linking each patch to another in accordance with a temporal-based nearest neighboring strategy. However, such constructed graph edges may be unreliable, since a sketch could have variants of drawings. In this paper, we propose a variant-drawing-protected method by equipping sketch patches with context-aware positional encoding (PE) to make better use of drawing orders for learning graphic sketch representation. Instead of injecting sketch drawings into graph edges, we embed this sequential information into graph nodes only. More specifically, each patch embedding is equipped with a sinusoidal absolute PE to highlight the sequential position in the drawing order. And its neighboring patches, ranked by the values of self-attention scores between patch embeddings, are equipped with learnable relative PEs to restore the contextual positions within a neighborhood. During message aggregation via graph convolutional networks, a node receives both semantic contents from patch embeddings and contextual patterns from PEs by its neighbors, arriving at drawing-order-enhanced sketch representations. Experimental results indicate that our method significantly improves sketch healing and controllable sketch synthesis.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
441,501
2109.10691
Query Evaluation in DatalogMTL -- Taming Infinite Query Results
In this paper, we investigate finite representations of DatalogMTL models. First, we discuss sufficient conditions for detecting programs that have finite models. Then, we study infinite models that eventually become constant and introduce sufficient criteria for programs that allow for such a representation. We proceed by considering infinite models that are eventually periodic and show that such a representation encompasses all DatalogMTL^FP programs, a widely discussed fragment. Finally, we provide a novel algorithm for reasoning over such finitely representable programs.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
true
256,713
1902.01520
Contextual Bandits with Continuous Actions: Smoothing, Zooming, and Adapting
We study contextual bandit learning with an abstract policy class and continuous action space. We obtain two qualitatively different regret bounds: one competes with a smoothed version of the policy class under no continuity assumptions, while the other requires standard Lipschitz assumptions. Both bounds exhibit data-dependent "zooming" behavior and, with no tuning, yield improved guarantees for benign problems. We also study adapting to unknown smoothness parameters, establishing a price-of-adaptivity and deriving optimal adaptive algorithms that require no additional information.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
120,674
1907.04629
Evolutionary techniques in lattice sieving algorithms
Lattice-based cryptography has recently emerged as a prominent candidate for secure communication in the quantum age. Its security relies on the hardness of certain lattice problems, and the inability of known lattice algorithms, such as lattice sieving, to solve these problems efficiently. In this paper we investigate the similarities between lattice sieving and evolutionary algorithms, how various improvements to lattice sieving can be viewed as applications of known techniques from evolutionary computation, and how other evolutionary techniques can benefit lattice sieving in practice.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
138,155
2309.08968
Sorted LLaMA: Unlocking the Potential of Intermediate Layers of Large Language Models for Dynamic Inference
Large language models (LLMs) have revolutionized natural language processing (NLP) by excelling at understanding and generating human-like text. However, their widespread deployment can be prohibitively expensive. SortedNet is a recent training technique for enabling dynamic inference by leveraging the modularity in networks and sorting sub-models based on computation/accuracy in a nested manner. We extend SortedNet to generative NLP tasks, making large language models dynamic without any Pre-Training and by only replacing Standard Fine-Tuning (SFT) with Sorted Fine-Tuning (SoFT). Our approach boosts model efficiency, eliminating the need for multiple models for various scenarios during inference. We show that this approach can unlock the power of intermediate layers of transformers in generating the target output. Our sub-models remain integral components of the original model, minimizing storage requirements and transition costs between different computational/latency budgets. The efficacy of our proposed method was demonstrated by applying it to tune LLaMA 2 13B on the Stanford Alpaca dataset for instruction following and TriviaQA for closed-book question answering. Our results show the superior performance of sub-models in comparison to Standard Fine-Tuning and SFT+ICT (Early-Exit), all achieved with efficient tuning and without additional memory usage during inference.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
392,417
2009.01315
When Image Decomposition Meets Deep Learning: A Novel Infrared and Visible Image Fusion Method
Infrared and visible image fusion, as a hot topic in image processing and image enhancement, aims to produce fused images retaining the detail texture information in visible images and the thermal radiation information in infrared images. A critical step for this issue is to decompose features in different scales and to merge them separately. In this paper, we propose a novel dual-stream auto-encoder (AE) based fusion network. The core idea is that the encoder decomposes an image into base and detail feature maps with low- and high-frequency information, respectively, and that the decoder is responsible for the original image reconstruction. To this end, a well-designed loss function is established to make the base/detail feature maps similar/dissimilar. In the test phase, base and detail feature maps are respectively merged via an additional fusion layer, which contains a saliency weighted-based spatial attention module and a channel attention module to adaptively preserve more information from source images and to highlight the objects. Then the fused image is recovered by the decoder. Qualitative and quantitative results demonstrate that our method can generate fusion images containing highlighted targets and abundant detail texture information with strong reproducibility and meanwhile is superior to the state-of-the-art (SOTA) approaches.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
194,269
cs/0508057
On the Performance of Turbo Codes in Quasi-Static Fading Channels
In this paper, we investigate in detail the performance of turbo codes in quasi-static fading channels both with and without antenna diversity. First, we develop a simple and accurate analytic technique to evaluate the performance of turbo codes in quasi-static fading channels. The proposed analytic technique relates the frame error rate of a turbo code to the iterative decoder convergence threshold, rather than to the turbo code distance spectrum. Subsequently, we compare the performance of various turbo codes in quasi-static fading channels. We show that, in contrast to the situation in the AWGN channel, turbo codes with different interleaver sizes or turbo codes based on RSC codes with different constraint lengths and generator polynomials exhibit identical performance. Moreover, we also compare the performance of turbo codes and convolutional codes in quasi-static fading channels under the condition of identical decoding complexity. In particular, we show that turbo codes do not outperform convolutional codes in quasi-static fading channels with no antenna diversity; and that turbo codes only outperform convolutional codes in quasi-static fading channels with antenna diversity.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
538,884
2307.04390
CT-based Subchondral Bone Microstructural Analysis in Knee Osteoarthritis via MR-Guided Distillation Learning
Background: MR-based subchondral bone effectively predicts knee osteoarthritis. However, its clinical application is limited by the cost and time of MR. Purpose: We aim to develop a novel distillation-learning-based method named SRRD for subchondral bone microstructural analysis using easily-acquired CT images, which leverages paired MR images to enhance the CT-based analysis model during training. Materials and Methods: Knee joint images of both CT and MR modalities were collected from October 2020 to May 2021. Firstly, we developed a GAN-based generative model to transform MR images into CT images, which was used to establish the anatomical correspondence between the two modalities. Next, we obtained numerous patches of subchondral bone regions of MR images, together with their trabecular parameters (BV / TV, Tb. Th, Tb. Sp, Tb. N) from the corresponding CT image patches via regression. The distillation-learning technique was used to train the regression model and transfer MR structural information to the CT-based model. The regressed trabecular parameters were further used for knee osteoarthritis classification. Results: A total of 80 participants were evaluated. CT-based regression results of trabecular parameters achieved intra-class correlation coefficients (ICCs) of 0.804, 0.773, 0.711, and 0.622 for BV / TV, Tb. Th, Tb. Sp, and Tb. N, respectively. The use of distillation learning significantly improved the performance of the CT-based knee osteoarthritis classification method using the CNN approach, yielding an AUC score of 0.767 (95% CI, 0.681-0.853) instead of 0.658 (95% CI, 0.574-0.742) (p<.001). Conclusions: The proposed SRRD method showed high reliability and validity in MR-CT registration, regression, and knee osteoarthritis classification, indicating the feasibility of subchondral bone microstructural analysis based on CT images.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
378,387
2011.11912
Variational Monocular Depth Estimation for Reliability Prediction
Self-supervised learning for monocular depth estimation is widely investigated as an alternative to the supervised learning approach, which requires a large amount of ground truth. Previous works have successfully improved the accuracy of depth estimation by modifying the model structure, adding objectives, and masking dynamic objects and occluded areas. However, when using such estimated depth images in applications such as autonomous vehicles and robots, we have to uniformly trust the estimated depth at each pixel position. This could lead to fatal errors in performing the tasks, because the estimated depth at some pixels may be severely erroneous. In this paper, we theoretically formulate a variational model for monocular depth estimation to predict the reliability of the estimated depth image. Based on the results, we can exclude estimated depths with low reliability or refine them for actual use. The effectiveness of the proposed method is quantitatively and qualitatively demonstrated using the KITTI benchmark and the Make3D dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
207,990
1804.04694
A Variational U-Net for Conditional Appearance and Shape Generation
Deep generative models have demonstrated great performance in image synthesis. However, results deteriorate in case of spatial deformations, since they generate images of objects directly, rather than modeling the intricate interplay of their inherent shape and appearance. We present a conditional U-Net for shape-guided image generation, conditioned on the output of a variational autoencoder for appearance. The approach is trained end-to-end on images, without requiring samples of the same object with varying pose or appearance. Experiments show that the model enables conditional image generation and transfer. Therefore, either shape or appearance can be retained from a query image, while freely altering the other. Moreover, appearance can be sampled due to its stochastic latent representation, while preserving shape. In quantitative and qualitative experiments on COCO, DeepFashion, shoes, Market-1501 and handbags, the approach demonstrates significant improvements over the state-of-the-art.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
94,914
2104.08415
Risk score learning for COVID-19 contact tracing apps
Digital contact tracing apps for COVID, such as the one developed by Google and Apple, need to estimate the risk that a user was infected during a particular exposure, in order to decide whether to notify the user to take precautions, such as entering into quarantine, or requesting a test. Such risk score models contain numerous parameters that must be set by the public health authority. In this paper, we show how to automatically learn these parameters from data. Our method needs access to exposure and outcome data. Although this data is already being collected (in an aggregated, privacy-preserving way) by several health authorities, in this paper we limit ourselves to simulated data, so that we can systematically study the different factors that affect the feasibility of the approach. In particular, we show that the parameters become harder to estimate when there is more missing data (e.g., due to infections which were not recorded by the app), and when there is model misspecification. Nevertheless, the learning approach outperforms a strong manually designed baseline. Furthermore, the learning approach can adapt even when the risk factors of the disease change, e.g., due to the evolution of new variants, or the adoption of vaccines.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
230,785
1801.02613
Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality
Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction. To better understand such attacks, a characterization is needed of the properties of regions (the so-called 'adversarial subspaces') in which adversarial examples lie. We tackle this challenge by characterizing the dimensional properties of adversarial regions, via the use of Local Intrinsic Dimensionality (LID). LID assesses the space-filling capability of the region surrounding a reference example, based on the distance distribution of the example to its neighbors. We first provide explanations about how adversarial perturbation can affect the LID characteristic of adversarial regions, and then show empirically that LID characteristics can facilitate the distinction of adversarial examples generated using state-of-the-art attacks. As a proof-of-concept, we show that a potential application of LID is to distinguish adversarial examples, and the preliminary results show that it can outperform several state-of-the-art detection measures by large margins for five attack strategies considered in this paper across three benchmark datasets. Our analysis of the LID characteristic for adversarial regions not only motivates new directions of effective adversarial defense, but also opens up more challenges for developing new attacks to better understand the vulnerabilities of DNNs.
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
87,955
1303.6020
Multi-Group Testing for Items with Real-Valued Status under Standard Arithmetic
This paper proposes a novel generalization of group testing, called multi-group testing, which relaxes the notion of "testing subset" in group testing to "testing multi-set". The generalization aims to learn more information of each item to be tested rather than identify only defectives as was done in conventional group testing. This paper provides efficient nonadaptive strategies for the multi-group testing problem. The major tool is a new structure, $q$-ary additive $(w,d)$-disjunct matrix, which is a generalization of the well-known binary disjunct matrix introduced by Kautz and Singleton in 1964.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
23,236
2110.07234
On the Stability of Low Pass Graph Filter With a Large Number of Edge Rewires
Recently, the stability of graph filters has been studied as one of the key theoretical properties driving the highly successful graph convolutional neural networks (GCNs). The stability of a graph filter characterizes the effect of topology perturbation on the output of a graph filter, a fundamental building block for GCNs. Many existing results have focused on the regime of small perturbation with a small number of edge rewires. However, the number of edge rewires can be large in many applications. To study the latter case, this work departs from the previous analysis and proves a bound on the stability of graph filter relying on the filter's frequency response. Assuming the graph filter is low pass, we show that the stability of the filter depends on perturbation to the community structure. As an application, we show that for stochastic block model graphs, the graph filter distance converges to zero when the number of nodes approaches infinity. Numerical simulations validate our findings.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
260,910
2310.14566
HallusionBench: An Advanced Diagnostic Suite for Entangled Language Hallucination and Visual Illusion in Large Vision-Language Models
We introduce HallusionBench, a comprehensive benchmark designed for the evaluation of image-context reasoning. This benchmark presents significant challenges to advanced large visual-language models (LVLMs), such as GPT-4V(Vision), Gemini Pro Vision, Claude 3, and LLaVA-1.5, by emphasizing nuanced understanding and interpretation of visual data. The benchmark comprises 346 images paired with 1129 questions, all meticulously crafted by human experts. We introduce a novel structure for these visual questions designed to establish control groups. This structure enables us to conduct a quantitative analysis of the models' response tendencies, logical consistency, and various failure modes. In our evaluation on HallusionBench, we benchmarked 15 different models, highlighting a 31.42% question-pair accuracy achieved by the state-of-the-art GPT-4V. Notably, all other evaluated models achieve accuracy below 16%. Moreover, our analysis not only highlights the observed failure modes, including language hallucination and visual illusion, but also deepens an understanding of these pitfalls. Our comprehensive case studies within HallusionBench shed light on the challenges of hallucination and illusion in LVLMs. Based on these insights, we suggest potential pathways for their future improvement. The benchmark and codebase can be accessed at https://github.com/tianyi-lab/HallusionBench.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
401,921
2307.11758
A Comprehensive Introduction of Visual-Inertial Navigation
In this article, a tutorial introduction to visual-inertial navigation (VIN) is presented. Visual and inertial perception are two complementary sensing modalities. Cameras and inertial measurement units (IMU) are the corresponding sensors for these two modalities. The low cost and light weight of camera-IMU sensor combinations make them ubiquitous in robotic navigation. Visual-inertial navigation is a state estimation problem that estimates the ego-motion and local environment of the sensor platform. This paper presents visual-inertial navigation in the classical state estimation framework, first illustrating the estimation problem in terms of state variables and system models, including related quantity representations (parameterizations), IMU dynamic and camera measurement models, and the corresponding general probabilistic graphical models (factor graphs). Secondly, we investigate the existing model-based estimation methodologies, which involve filter-based and optimization-based frameworks and related on-manifold operations. We also discuss the calibration of some relevant parameters, as well as the initialization of the state of interest in optimization-based frameworks. Then the evaluation and improvement of VIN in terms of accuracy, efficiency, and robustness are discussed. Finally, we briefly mention the recent development of learning-based methods that may become alternatives to traditional model-based methods.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
381,009
2002.00842
Mi YouTube es Su YouTube? Analyzing the Cultures using YouTube Thumbnails of Popular Videos
YouTube, a world-famous video sharing website, maintains a list of the top trending videos on the platform. Due to its huge number of users, it enables researchers to understand people's preferences by analyzing the trending videos. Trending videos vary from country to country. By analyzing such differences and changes, we can tell how users' preferences differ across locations. Previous work focuses on analyzing such cultural preferences from videos' metadata, while the cultural information hidden within the visual content has not been explored. In this study, we explore cultural preferences among countries using the thumbnails of YouTube trending videos. We first process the thumbnail images of the videos using object detectors. The collected object information is then used for various statistical analyses. In particular, we examine the data from three perspectives: geographical locations, video genres and users' reactions. Experimental results indicate that users from similar cultures share interests in watching similar videos on YouTube. Our study demonstrates that discovering cultural preferences through thumbnails can be an effective mechanism for video social media analysis.
false
false
false
true
false
false
false
false
false
false
false
true
false
true
false
false
false
false
162,491
2404.01991
Kallaama: A Transcribed Speech Dataset about Agriculture in the Three Most Widely Spoken Languages in Senegal
This work is part of the Kallaama project, whose objective is to produce and disseminate national language corpora for speech technology development in the field of agriculture. Except for Wolof, which benefits from some language data for natural language processing, the national languages of Senegal are largely ignored by language technology providers. However, such technologies are key to the protection, promotion and teaching of these languages. Kallaama focuses on the three main languages spoken by Senegalese people: Wolof, Pulaar and Sereer. These languages are widely spoken by the population, with around 10 million native Senegalese speakers, not to mention those outside the country. However, they remain under-resourced in terms of machine-readable data that can be used for automatic processing and language technologies, all the more so in the agricultural sector. We release a transcribed speech dataset containing 125 hours of recordings about agriculture in each of the above-mentioned languages. These resources are specifically designed for Automatic Speech Recognition purposes, including traditional approaches. To build such technologies, we provide textual corpora in Wolof and Pulaar, and a pronunciation lexicon containing 49,132 entries from the Wolof dataset.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
443,676
1909.04885
Addressing Algorithmic Bottlenecks in Elastic Machine Learning with Chicle
Distributed machine learning training is one of the most common and important workloads running on data centers today, but it is rarely executed alone. Instead, to reduce costs, computing resources are consolidated and shared by different applications. In this scenario, elasticity and proper load balancing are vital to maximize efficiency, fairness, and utilization. Currently, most distributed training frameworks do not support the aforementioned properties. The few exceptions that do support elasticity imitate generic distributed frameworks and use micro-tasks. In this paper we illustrate that micro-tasks are problematic for machine learning applications, because they require a high degree of parallelism which hinders the convergence of distributed training at a pure algorithmic level (i.e., ignoring overheads and scalability limitations). To address this, we propose Chicle, a new elastic distributed training framework which exploits the nature of machine learning algorithms to implement elasticity and load balancing without micro-tasks. We use Chicle to train deep neural networks as well as generalized linear models, and show that Chicle achieves performance competitive with state-of-the-art rigid frameworks, while efficiently enabling elastic execution and dynamic load balancing.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
144,931
2003.03612
Frozen Binomials on the Web: Word Ordering and Language Conventions in Online Text
There is inherent information captured in the order in which we write words in a list. The orderings of binomials --- lists of two words separated by `and' or `or' --- have been studied for more than a century. These binomials are common across many areas of speech, in both formal and informal text. In the last century, numerous explanations have been given to describe what order people use for these binomials, from differences in semantics to differences in phonology. These rules describe primarily `frozen' binomials that exist in exactly one ordering, and they have lacked large-scale trials to determine efficacy. Online text provides a unique opportunity to study these lists in the context of informal text at a very large scale. In this work, we expand the view of binomials to include a large-scale quantitative analysis of both frozen and non-frozen binomials. Using this data, we then demonstrate that most previously proposed rules are ineffective at predicting binomial ordering. By tracking the order of these binomials across time and communities we are able to establish additional, unexplored dimensions central to these predictions. Expanding beyond the question of individual binomials, we also explore the global structure of binomials in various communities, establishing a new model for these lists and analyzing this structure for non-frozen and frozen binomials. Additionally, novel analysis of trinomials --- lists of length three --- suggests that none of the binomial analysis applies in these cases. Finally, we demonstrate how large data sets gleaned from the web can be used in conjunction with older theories to expand and improve on old questions.
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
167,292
2501.12919
Contrastive Language-Structure Pre-training Driven by Materials Science Literature
Understanding structure-property relationships is an essential yet challenging aspect of materials discovery and development. To facilitate this process, recent studies in materials informatics have sought latent embedding spaces of crystal structures to capture their similarities based on properties and functionalities. However, abstract feature-based embedding spaces are human-unfriendly and prevent intuitive and efficient exploration of the vast materials space. Here we introduce Contrastive Language--Structure Pre-training (CLaSP), a learning paradigm for constructing crossmodal embedding spaces between crystal structures and texts. CLaSP aims to achieve material embeddings that 1) capture property- and functionality-related similarities between crystal structures and 2) allow intuitive retrieval of materials via user-provided description texts as queries. To compensate for the lack of sufficient datasets linking crystal structures with textual descriptions, CLaSP leverages a dataset of over 400,000 published crystal structures and corresponding publication records, including paper titles and abstracts, for training. We demonstrate the effectiveness of CLaSP through text-based crystal structure screening and embedding space visualization.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
526,486
2202.00530
Coordinated Frequency Control through Safe Reinforcement Learning
With widespread deployment of renewables, the electric power grids are experiencing increasing dynamics and uncertainties, with its secure operation being threatened. Existing frequency control schemes based on day-ahead offline analysis and minute-level online sensitivity calculations are difficult to adapt to rapidly changing system states. In particular, they are unable to facilitate coordinated control of system frequency and power flows. A refined approach and tools are urgently needed to assist system operators to make timely decisions. This paper proposes a novel model-free coordinated frequency control framework based on safe reinforcement learning, with multiple control objectives considered. The load frequency control problem is modeled as a constrained Markov decision process, which can be solved by an AI agent continuously interacting with the grid to achieve sub-second decision making. Extensive numerical experiments conducted at East China Power Grid demonstrate the effectiveness and promise of the proposed method.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
278,176
2109.05257
Towards a Rigorous Evaluation of Time-series Anomaly Detection
In recent years, proposed studies on time-series anomaly detection (TAD) report high F1 scores on benchmark TAD datasets, giving the impression of clear improvements in TAD. However, most studies apply a peculiar evaluation protocol called point adjustment (PA) before scoring. In this paper, we theoretically and experimentally reveal that the PA protocol has a great possibility of overestimating the detection performance; that is, even a random anomaly score can easily turn into a state-of-the-art TAD method. Therefore, the comparison of TAD methods after applying the PA protocol can lead to misguided rankings. Furthermore, we question the potential of existing TAD methods by showing that an untrained model obtains comparable detection performance to the existing methods even when PA is forbidden. Based on our findings, we propose a new baseline and an evaluation protocol. We expect that our study will help a rigorous evaluation of TAD and lead to further improvement in future research.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
254,726
2111.13445
How Well Do Sparse Imagenet Models Transfer?
Transfer learning is a classic paradigm by which models pretrained on large "upstream" datasets are adapted to yield good results on "downstream" specialized datasets. Generally, more accurate models on the "upstream" dataset tend to provide better transfer accuracy "downstream". In this work, we perform an in-depth investigation of this phenomenon in the context of convolutional neural networks (CNNs) trained on the ImageNet dataset, which have been pruned - that is, compressed by sparsifying their connections. We consider transfer using unstructured pruned models obtained by applying several state-of-the-art pruning methods, including magnitude-based, second-order, re-growth, lottery-ticket, and regularization approaches, in the context of twelve standard transfer tasks. In a nutshell, our study shows that sparse models can match or even outperform the transfer performance of dense models, even at high sparsities, and, while doing so, can lead to significant inference and even training speedups. At the same time, we observe and analyze significant differences in the behaviour of different pruning methods.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
268,294
1509.05506
Energy-Efficient Design of MIMO Heterogeneous Networks with Wireless Backhaul
As future networks aim to meet the ever-increasing requirements of high data rate applications, dense and heterogeneous networks (HetNets) will be deployed to provide better coverage and throughput. Besides the important implications for energy consumption, the trend towards densification calls for more and more wireless links to forward a massive backhaul traffic into the core network. It is critically important to take into account the presence of a wireless backhaul for the energy-efficient design of HetNets. In this paper, we provide a general framework to analyze the energy efficiency of a two-tier MIMO heterogeneous network with wireless backhaul in the presence of both uplink and downlink transmissions. We find that under spatial multiplexing the energy efficiency of a HetNet is sensitive to the network load, and it should be taken into account when controlling the number of users served by each base station. We show that a two-tier HetNet with wireless backhaul can be significantly more energy efficient than a one-tier cellular network. However, this requires the bandwidth division between radio access links and wireless backhaul to be optimally designed according to the load conditions.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
47,057
2311.15531
Sleep When Everything Looks Fine: Self-Triggered Monitoring for Signal Temporal Logic Tasks
Online monitoring is a widely used technique for assessing whether the performance of a system satisfies some desired requirements during run-time operation. Existing works on online monitoring usually assume that the monitor can acquire system information periodically at each time instant. However, such a periodic mechanism may be unnecessarily energy-consuming as it essentially requires sensors to be turned on continuously. In this paper, we propose a novel self-triggered mechanism for model-based online monitoring of discrete-time dynamical systems under specifications described by signal temporal logic (STL) formulae. Specifically, instead of sampling the system state at each time instant, a self-triggered monitor can actively determine when the next system state is sampled in addition to its monitoring decision regarding the satisfaction of the task. We propose an effective algorithm for synthesizing such a self-triggered monitor that can correctly evaluate a given STL formula on-the-fly while maximizing the time interval between two observations. We show that, compared with the standard online monitor with periodic information, the proposed self-triggered monitor can significantly reduce the observation burden while ensuring that no information about the STL formula is lost. Case studies are provided to illustrate the proposed monitoring mechanism.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
410,543
1701.01095
Estimating Quality in Multi-Objective Bandits Optimization
Many real-world applications are characterized by a number of conflicting performance measures. As optimizing in a multi-objective setting leads to a set of non-dominated solutions, a preference function is required for selecting the solution with the appropriate trade-off between the objectives. The question is: how good do estimations of these objectives have to be in order for the solution maximizing the preference function to remain unchanged? In this paper, we introduce the concept of preference radius to characterize the robustness of the preference function and provide guidelines for controlling the quality of estimations in the multi-objective setting. More specifically, we provide a general formulation of multi-objective optimization under the bandits setting. We show how the preference radius relates to the optimal gap and we use this concept to provide a theoretical analysis of the Thompson sampling algorithm from multivariate normal priors. We finally present experiments to support the theoretical results and highlight the fact that one cannot simply scalarize multi-objective problems into single-objective problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
66,358
2410.14118
Skill Generalization with Verbs
It is imperative that robots can understand natural language commands issued by humans. Such commands typically contain verbs that signify what action should be performed on a given object and that are applicable to many objects. We propose a method for generalizing manipulation skills to novel objects using verbs. Our method learns a probabilistic classifier that determines whether a given object trajectory can be described by a specific verb. We show that this classifier accurately generalizes to novel object categories with an average accuracy of 76.69% across 13 object categories and 14 verbs. We then perform policy search over the object kinematics to find an object trajectory that maximizes classifier prediction for a given verb. Our method allows a robot to generate a trajectory for a novel object based on a verb, which can then be used as input to a motion planner. We show that our model can generate trajectories that are usable for executing five verb commands applied to novel instances of two different object categories on a real robot.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
499,876
2209.04881
On The Computational Complexity of Self-Attention
Transformer architectures have led to remarkable progress in many state-of-the-art applications. However, despite their successes, modern transformers rely on the self-attention mechanism, whose time- and space-complexity is quadratic in the length of the input. Several approaches have been proposed to speed up self-attention mechanisms to achieve sub-quadratic running time; however, the large majority of these works are not accompanied by rigorous error guarantees. In this work, we establish lower bounds on the computational complexity of self-attention in a number of scenarios. We prove that the time complexity of self-attention is necessarily quadratic in the input length, unless the Strong Exponential Time Hypothesis (SETH) is false. This argument holds even if the attention computation is performed only approximately, and for a variety of attention mechanisms. As a complement to our lower bounds, we show that it is indeed possible to approximate dot-product self-attention using finite Taylor series in linear time, at the cost of having an exponential dependence on the polynomial order.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
316,915
1109.3311
Escort entropies and divergences and related canonical distribution
We discuss two families of two-parameter entropies and divergences, derived from the standard R\'enyi and Tsallis entropies and divergences. These divergences and entropies are found as divergences or entropies of escort distributions. Exploiting the nonnegativity of the divergences, we derive the expression of the canonical distribution associated with the new entropies and an observable given as an escort-mean value. We show that this canonical distribution extends, and smoothly connects, the results obtained in nonextensive thermodynamics for the standard and generalized mean value constraints.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
12,175
2401.16937
Segmentation and Characterization of Macerated Fibers and Vessels Using Deep Learning
Wood comprises different cell types, such as fibers, tracheids and vessels, defining its properties. Studying the shape, size, and arrangement of cells in microscopy images is crucial for understanding wood characteristics. Typically, this involves macerating (soaking) samples in a solution to separate cells, then spreading them on slides for imaging with a microscope that covers a wide area, capturing thousands of cells. However, these cells often cluster and overlap in images, making segmentation difficult and time-consuming using standard image-processing methods. In this work, we developed an automatic deep learning segmentation approach that utilizes the one-stage YOLOv8 model for fast and accurate segmentation and characterization of macerated fibers and vessels from aspen trees in microscopy images. The model can analyze 32,640 x 25,920 pixel images and demonstrates effective cell detection and segmentation, achieving a mAP_{0.5-0.95} of 78%. To assess the model's robustness, we examined fibers from a genetically modified tree line known for longer fibers. The outcomes were comparable to previous manual measurements. Additionally, we created a user-friendly web application for image analysis and provided the code for use on Google Colab. By leveraging YOLOv8's advances, this work provides a deep learning solution to enable efficient quantification and analysis of wood cells suitable for practical applications.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
425,037
1407.7103
On Joint Source-Channel Coding for Correlated Sources Over Multiple-Access Relay Channels
We study the transmission of correlated sources over discrete memoryless (DM) multiple-access-relay channels (MARCs), in which both the relay and the destination have access to side information arbitrarily correlated with the sources. As the optimal transmission scheme is an open problem, in this work we propose a new joint source-channel coding scheme based on a novel combination of the correlation preserving mapping (CPM) technique with Slepian-Wolf (SW) source coding, and obtain the corresponding sufficient conditions. The proposed coding scheme is based on the decode-and-forward strategy, and utilizes CPM for encoding information simultaneously to the relay and the destination, whereas the cooperation information from the relay is encoded via SW source coding. It is shown that there are cases in which the new scheme strictly outperforms the schemes available in the literature. This is the first instance of a source-channel code that uses CPM for encoding information to two different nodes (relay and destination). In addition to sufficient conditions, we present three different sets of single-letter necessary conditions for reliable transmission of correlated sources over DM MARCs. The newly derived conditions are shown to be at least as tight as the previously known necessary conditions.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
34,911