Dataset schema (column: type, observed range):
- id: string (length 9 to 16)
- title: string (length 4 to 278)
- abstract: string (length 3 to 4.08k)
- 18 boolean label columns (2 classes each), in this order: cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
- __index_level_0__: int64 (0 to 541k)
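A small helper makes the record layout concrete: given the 18 boolean label columns listed above, it returns the active arXiv categories for one record. The record format (a plain dict keyed by column name) is an assumption for illustration; the column names and order come from the schema.

```python
# Hypothetical helper: collapse the 18 boolean label columns of one record
# into the list of active arXiv categories. Column names and order are taken
# from the schema above; representing a record as a dict is an assumption.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_labels(record: dict) -> list:
    """Return the categories whose boolean column is True, in schema order."""
    return [name for name in LABEL_COLUMNS if record.get(name)]

# Example: a record with only cs.LG and cs.CV set to True.
example = {name: False for name in LABEL_COLUMNS}
example.update({"cs.LG": True, "cs.CV": True})
print(active_labels(example))  # ['cs.LG', 'cs.CV']
```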
id: 2304.14418
title: SSTM: Spatiotemporal Recurrent Transformers for Multi-frame Optical Flow Estimation
abstract: Inaccurate optical flow estimates in and near occluded regions and in out-of-boundary regions are two significant limitations of current optical flow estimation algorithms. Recent state-of-the-art algorithms are two-frame methods that estimate optical flow sequentially for each consecutive image pair in a sequence. While this approach gives good flow estimates, it fails to generalize optical flows in occluded regions, mainly due to limited local evidence regarding moving elements in a scene. In this work, we propose a learning-based multi-frame optical flow estimation method that estimates two or more consecutive optical flows in parallel from multi-frame image sequences. Our underlying hypothesis is that by understanding temporal scene dynamics from longer sequences with more than two frames, we can characterize pixel-wise dependencies in a larger spatiotemporal domain, generalize complex motion patterns, and thereby improve the accuracy of optical flow estimates in occluded regions. We present learning-based spatiotemporal recurrent transformers for multi-frame optical flow estimation (SSTMs). Our method utilizes 3D Convolutional Gated Recurrent Units (3D-ConvGRUs) and spatiotemporal transformers to learn recurrent space-time motion dynamics and global dependencies in the scene and to provide generalized optical flow estimation. When compared with recent state-of-the-art two-frame and multi-frame methods on real-world and synthetic datasets, the performance of SSTMs was significantly higher in occluded and out-of-boundary regions. Among all published multi-frame methods, SSTM achieved state-of-the-art results on the Sintel Final and KITTI2015 benchmark datasets.
labels: cs.LG, cs.CV (all other label columns false)
__index_level_0__: 360,953
id: 2406.18535
title: DRAK: Unlocking Molecular Insights with Domain-Specific Retrieval-Augmented Knowledge in LLMs
abstract: Large Language Models (LLMs) encounter challenges with the unique syntax of specific domains, such as biomolecules. Existing fine-tuning or modality alignment techniques struggle to bridge the domain knowledge gap and understand complex molecular data, limiting LLMs' progress in specialized fields. To overcome these limitations, we propose an expandable and adaptable non-parametric knowledge injection framework named Domain-specific Retrieval-Augmented Knowledge (DRAK), aimed at enhancing reasoning capabilities in specific domains. Utilizing knowledge-aware prompts and gold label-induced reasoning, DRAK has developed profound expertise in the molecular domain and the capability to handle a broad spectrum of analysis tasks. We evaluated two distinct forms of DRAK variants, proving that DRAK exceeds previous benchmarks on six molecular tasks within the Mol-Instructions dataset. Extensive experiments have underscored DRAK's formidable performance and its potential to unlock molecular insights, offering a unified paradigm for LLMs to tackle knowledge-intensive tasks in specific domains. Our code will be available soon.
labels: cs.AI, cs.IR (all other label columns false)
__index_level_0__: 468,044
id: 2410.03806
title: Metadata Matters for Time Series: Informative Forecasting with Transformers
abstract: Time series forecasting is prevalent in extensive real-world applications, such as financial analysis and energy planning. Previous studies primarily focus on the time series modality, endeavoring to capture the intricate variations and dependencies inherent in time series. Beyond numerical time series data, we notice that metadata (e.g., dataset and variate descriptions) also carries valuable information essential for forecasting, which can be used to identify the application scenario and provides more interpretable knowledge than digit sequences. Inspired by this observation, we propose a Metadata-informed Time Series Transformer (MetaTST), which incorporates multiple levels of context-specific metadata into Transformer forecasting models to enable informative time series forecasting. To tackle the unstructured nature of metadata, MetaTST formalizes it into natural language using pre-designed templates and leverages large language models (LLMs) to encode these texts into metadata tokens as a supplement to classic series tokens, resulting in an informative embedding. Further, a Transformer encoder is employed to communicate series and metadata tokens, which can extend series representations with metadata information for more accurate forecasting. This design also allows the model to adaptively learn context-specific patterns across various scenarios, which is particularly effective in handling large-scale, diverse-scenario forecasting tasks. Experimentally, MetaTST achieves state-of-the-art performance compared to advanced time series models and LLM-based methods on widely acknowledged short- and long-term forecasting benchmarks, covering both single-dataset individual and multi-dataset joint training settings.
labels: cs.LG, cs.CL (all other label columns false)
__index_level_0__: 494,983
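The abstract above describes formalizing metadata into natural language via pre-designed templates before LLM encoding. The sketch below illustrates that templating step only; the field names and template wording are assumptions for illustration, not MetaTST's actual templates.

```python
# Hypothetical illustration of the "pre-designed template" step: unstructured
# metadata is rendered into a natural-language string that an LLM encoder
# would then turn into metadata tokens. Field names and wording are assumed.
TEMPLATE = ("This series comes from the {domain} domain; "
            "the variate '{variate}' is sampled every {freq}.")

def render_metadata(meta: dict) -> str:
    """Render a metadata dict into a natural-language description."""
    return TEMPLATE.format(**meta)

text = render_metadata({"domain": "energy", "variate": "load", "freq": "hour"})
print(text)
```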
id: 2409.15802
title: A Multi-Level Approach for Class Imbalance Problem in Federated Learning for Remote Industry 4.0 Applications
abstract: Deep neural network (DNN) models are effective solutions for Industry 4.0 applications (e.g., oil spill detection, fire detection, anomaly detection). However, training a DNN model needs a considerable amount of data collected from various sources and transferred to a central cloud server, which can be expensive and privacy-sensitive. For instance, in a remote offshore oil field where network connectivity is vulnerable, a federated fog environment can be a potential computing platform, making it feasible to perform computation within the federation. However, training a DNN model on fog systems poses security issues that the federated learning (FL) technique can resolve. The new challenge in this case is the class imbalance problem, which can be inherited in local data sets and can degrade the performance of the global model. Therefore, FL training needs to be performed considering the class imbalance problem locally. In addition, an efficient technique to select the relevant worker models needs to be adopted at the global level to increase the robustness of the global model. Accordingly, we utilize a suitable loss function addressing the class imbalance in workers at the local level. In addition, we employ a dynamic threshold mechanism with user-defined worker weights to efficiently select workers for aggregation, improving the global model's robustness. Finally, we perform an extensive empirical evaluation to explore the benefits of our solution and find up to 3-5% performance improvement over baseline federated learning methods.
labels: cs.LG, cs.SY, Other (all other label columns false)
__index_level_0__: 491,069
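The selection-and-aggregation idea in the abstract above can be sketched in a few lines: keep only workers whose score clears a dynamic threshold, then average their parameters with user-defined weights. This is an illustrative sketch, not the paper's implementation; the names and the mean-score threshold rule are assumptions.

```python
# Illustrative sketch (not the paper's code): select workers whose validation
# score clears a dynamic threshold, then aggregate the selected workers'
# parameter vectors with user-defined weights. The threshold rule (mean of
# all scores) is an assumed placeholder for the paper's mechanism.

def select_workers(scores: dict) -> list:
    """Keep workers scoring at or above the mean score (dynamic threshold)."""
    threshold = sum(scores.values()) / len(scores)
    return [w for w, s in scores.items() if s >= threshold]

def aggregate(params: dict, weights: dict, selected: list) -> list:
    """Weighted average of the selected workers' parameter vectors."""
    total = sum(weights[w] for w in selected)
    dim = len(next(iter(params.values())))
    return [sum(weights[w] * params[w][i] for w in selected) / total
            for i in range(dim)]

scores = {"w1": 0.9, "w2": 0.4, "w3": 0.8}   # per-worker validation scores
params = {"w1": [1.0, 2.0], "w2": [9.0, 9.0], "w3": [3.0, 4.0]}
weights = {"w1": 1.0, "w2": 1.0, "w3": 1.0}  # user-defined worker weights
chosen = select_workers(scores)              # w2 falls below the mean (0.7)
print(chosen, aggregate(params, weights, chosen))  # ['w1', 'w3'] [2.0, 3.0]
```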
id: 2106.13956
title: Solar Irradiation Forecasting using Genetic Algorithms
abstract: Renewable energy forecasting is attaining greater importance due to its constantly increasing contribution to electrical power grids. Solar energy is one of the most significant contributors to renewable energy and is dependent on solar irradiation. For the effective management of electrical power grids, forecasting models that predict solar irradiation with high accuracy are needed. In the current study, machine learning techniques such as Linear Regression, Extreme Gradient Boosting (XGB), and Genetic Algorithm Optimization are used to forecast solar irradiation. The data used for training and validation is recorded at three different geographical stations in the United States that are part of the SURFRAD network. A Global Horizontal Index (GHI) is predicted for the models built and compared. Genetic Algorithm Optimization is applied to XGB to further improve the accuracy of solar irradiation prediction.
labels: cs.LG, cs.NE (all other label columns false)
__index_level_0__: 243,247
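The abstract above applies a genetic algorithm to tune XGBoost. A minimal GA loop can be sketched on a toy objective standing in for the model's validation error; everything here (objective, population size, mutation scale, generation count) is an illustration choice, not the authors' setup.

```python
import random

# Minimal genetic-algorithm sketch in the spirit of the abstract: the paper
# tunes XGBoost with a GA; here the toy objective f(x) = -(x - 3)^2 stands in
# for (negated) validation error. All hyperparameters are arbitrary choices.

def fitness(x: float) -> float:
    return -((x - 3.0) ** 2)  # higher is better; optimum at x = 3

def evolve(generations: int = 60, pop_size: int = 20, seed: int = 0) -> float:
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]             # selection: keep top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                    # crossover: midpoint
            child += rng.gauss(0, 0.1)             # mutation: small noise
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(round(best, 2))  # close to the optimum x = 3
```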
id: 2112.01183
title: Shallow geothermal energy potential for heating and cooling of buildings with regeneration under climate change scenarios
abstract: Shallow ground-source heat pumps (GSHPs) are a promising technology for contributing to the decarbonisation of the energy sector. In heating-dominated climates, the combined use of GSHPs for both heating and cooling increases their technical potential, defined as the maximum energy that can be exchanged with the ground, as the re-injection of excess heat from space cooling leads to a seasonal regeneration of the ground. This paper proposes a new approach to quantify the technical potential of GSHPs, accounting for effects of seasonal regeneration, and to estimate the useful energy to supply building energy demands at regional scale. The useful energy is obtained for direct heat exchange and for district heating and cooling (DHC) under several scenarios for climate change and market penetration levels of cooling systems. The case study in western Switzerland suggests that seasonal regeneration allows for annual maximum heat extraction densities above 300 kWh/m$^2$ at heat injection densities above 330 kWh/m$^2$. Results also show that GSHPs may cover up to 55% of heating demand while covering 57% of service-sector cooling demand for individual GSHPs in 2050, which increases to around 85% with DHC. The regional-scale results may serve to inform decision making on strategic areas for installing GSHPs.
labels: cs.SY (all other label columns false)
__index_level_0__: 269,401
id: 2204.10505
title: Application of Federated Learning in Building a Robust COVID-19 Chest X-ray Classification Model
abstract: While developing artificial intelligence (AI)-based algorithms to solve problems, the amount of data plays a pivotal role: a large amount of data helps researchers and engineers develop robust AI algorithms. In the case of building AI-based models for medical imaging problems, these data need to be transferred from the medical institutions where they were acquired to the organizations developing the algorithms. This movement of data involves time-consuming formalities like complying with HIPAA, GDPR, etc. There is also a risk of patients' private data getting leaked, compromising their confidentiality. One solution to these problems is the Federated Learning framework. Federated Learning (FL) helps AI models generalize better and creates a robust AI model by using data from different sources with different distributions and data characteristics, without moving all the data to a central server. In our paper, we apply the FL framework to train a deep learning model for the binary classification problem of predicting the presence or absence of COVID-19. We took three different sources of data and trained individual models on each source. Then we trained an FL model on the complete data and compared all the model performances. We demonstrate that the FL model performs better than the individual models, and at par with the model trained on all the data combined at a central server. Thus, federated learning leads to generalized AI models without the cost of data transfer and regulatory overhead.
labels: cs.LG (all other label columns false)
__index_level_0__: 292,810
id: 2407.00036
title: LiveData -- A Worldwide Data Mesh for Stratified Data
abstract: Data reuse is fundamental for reducing the data integration effort required to build data supporting new applications, especially in data-scarcity contexts. However, data reuse requires dealing with data heterogeneity, which is always present in data coming from different sources. Such heterogeneity appears at different levels, such as the language used by the data, the structure of the information it represents, and the data types and formats adopted by the datasets. Despite the valuable insights gained by reusing data across contexts, dealing with data heterogeneity is still a high price to pay. Additionally, data reuse is hampered by the lack of data distribution infrastructures supporting the production and distribution of quality, interoperable data. These issues are amplified in cross-country data reuse, where geographical and cultural differences are more pronounced. In this paper, we propose LiveData, a cross-country data distribution network handling high-quality, diversity-aware data. LiveData is composed of nodes whose architecture provides components for the generation and distribution of a new type of data, where heterogeneity is transformed into information diversity and treated as a feature, explicitly defined and used to satisfy data users' purposes. This paper presents the specification of the LiveData network by defining the architecture and the type of data handled by its nodes. This specification is currently being used to implement a concrete use case for data reuse and integration between the University of Trento (Italy) and the National University of Mongolia.
labels: cs.DB, Other (all other label columns false)
__index_level_0__: 468,687
id: 2005.14039
title: 3D logic cells design and results based on Vertical NWFET technology including tied compact model
abstract: Gate-all-around Vertical Nanowire Field Effect Transistors (VNWFETs) are emerging devices well suited to pursue scaling beyond the lateral scaling limitations around 7 nm. This work explores the relative merits and drawbacks of the technology in the context of logic cell design. We describe a junctionless nanowire technology and an associated compact model, which accurately describes fabricated device behavior in all regions of operation for transistors based on between 16 and 625 parallel nanowires with diameters between 22 and 50 nm. We used this model to simulate the projected performance of inverter logic gates based on passive-load, active-load, and complementary topologies, and carried out a performance exploration over the number of nanowires per transistor. In terms of compactness, through a dedicated full-3D layout design, we also demonstrate a 1.4x reduction in lateral dimensions for the complementary structure with respect to 7 nm FinFET-based inverters.
labels: cs.SY, Other (all other label columns false)
__index_level_0__: 179,173
id: 1902.05630
title: Using Key Player Analysis as a Method for Examining the Role of Community Animators in Technology Adoption
abstract: This paper examines the role of community animators in technology adoption. Community animators are individuals who actively build social networks and broker ties between nodes in those networks. The present study observes technology adoption patterns through data collected from a mobile application at a local arts festival. A social network was constructed through photo-sharing and interaction within the app. Given these data, we propose the use of key player analysis to identify community animators. In addition, we use a graph invariant (i.e., fragmentation of the network) to describe the role and impact of key players on the full network of interactions. Our results contribute to the literature on technology adoption in usability studies by proposing a method to quantify and identify the theoretical concept of community animators. We further analyze the types of community animators found in early adoption of technology: the early adopters themselves, and the initiating developers.
labels: cs.SI, cs.CY (all other label columns false)
__index_level_0__: 121,586
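The fragmentation invariant mentioned in the abstract above (from the key player literature, Borgatti's F = 1 - Σ_k s_k(s_k - 1) / (n(n - 1)), with s_k the connected-component sizes) is easy to compute directly. The toy path graph below is an illustration only, not the study's network.

```python
# Sketch of the fragmentation invariant used in key player analysis:
# F = 1 - sum_k s_k(s_k - 1) / (n(n - 1)), where s_k are the sizes of the
# connected components. Removing a key player should raise F sharply.

def component_sizes(adj: dict) -> list:
    """Sizes of the connected components of an undirected graph."""
    seen, sizes = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            size += 1
            stack.extend(adj[node] - seen)
        sizes.append(size)
    return sizes

def fragmentation(adj: dict) -> float:
    n = len(adj)
    reach = sum(s * (s - 1) for s in component_sizes(adj))
    return 1.0 - reach / (n * (n - 1))

def remove(adj: dict, v) -> dict:
    """Return the graph with vertex v (and its edges) deleted."""
    return {u: nbrs - {v} for u, nbrs in adj.items() if u != v}

# Toy path graph 0-1-2-3-4: removing the middle node 2 splits it in two.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(fragmentation(path))             # 0.0 (connected graph)
print(fragmentation(remove(path, 2)))  # 2/3 after removing the key player
```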
id: 2311.03420
title: Text Augmentations with R-drop for Classification of Tweets Self Reporting Covid-19
abstract: This paper presents models created for the Social Media Mining for Health 2023 shared task. Our team addressed the first task, classifying tweets that self-report Covid-19 diagnosis. Our approach involves a classification model that incorporates diverse textual augmentations and utilizes R-drop to augment data and mitigate overfitting, boosting model efficacy. Our leading model, enhanced with R-drop and augmentations like synonym substitution, reserved words, and back translations, outperforms the task mean and median scores. Our system achieves an impressive F1 score of 0.877 on the test set.
labels: cs.IR, cs.LG, cs.CL (all other label columns false)
__index_level_0__: 405,853
id: 2012.13136
title: LCEval: Learned Composite Metric for Caption Evaluation
abstract: Automatic evaluation metrics hold fundamental importance in the development and fine-grained analysis of captioning systems. While current evaluation metrics tend to achieve an acceptable correlation with human judgements at the system level, they fail to do so at the caption level. In this work, we propose a neural network-based learned metric to improve caption-level caption evaluation. To get a deeper insight into the parameters which impact a learned metric's performance, this paper investigates the relationship between different linguistic features and the caption-level correlation of the learned metrics. We also compare metrics trained with different training examples to measure the variations in their evaluation. Moreover, we perform a robustness analysis, which highlights the sensitivity of learned and handcrafted metrics to various sentence perturbations. Our empirical analysis shows that our proposed metric not only outperforms the existing metrics in terms of caption-level correlation but also shows a strong system-level correlation against human assessments.
labels: cs.AI (all other label columns false)
__index_level_0__: 213,124
id: 2108.01987
title: Core-Stable Committees under Restricted Domains
abstract: We study the setting of committee elections, where a group of individuals needs to collectively select a given-size subset of available objects. This model is relevant for a number of real-life scenarios including political elections, participatory budgeting, and facility location. We focus on the core -- the classic notion of proportionality, stability, and fairness. We show that for a number of restricted domains including voter-interval, candidate-interval, single-peaked, and single-crossing preferences, the core is non-empty and can be found in polynomial time. We show that the core might be empty for strict top-monotonic preferences, yet we introduce a relaxation of this class which guarantees non-emptiness of the core. Our algorithms work in both the randomized and discrete models. We also show that the classic known proportional rules do not return committees from the core even for the most restrictive domains among those we consider (in particular for 1D-Euclidean preferences). We additionally prove a number of structural results that give better insights into the nature of some of the restricted domains, and which in particular give a better intuitive understanding of the class of top-monotonic preferences.
labels: cs.AI, Other (all other label columns false)
__index_level_0__: 249,179
id: 1710.08502
title: Convolutional Neural Knowledge Graph Learning
abstract: Previous models for learning entity and relationship embeddings of knowledge graphs, such as TransE, TransH, and TransR, aim to explore new links based on learned representations. However, these models interpret relationships as simple translations on entity embeddings. In this paper, we try to learn more complex connections between entities and relationships. In particular, we use a Convolutional Neural Network (CNN) to learn entity and relationship representations in knowledge graphs. In our model, we treat entities and relationships as one-dimensional numerical sequences with the same length. We then combine each triplet of head, relationship, and tail together as a matrix with height 3. A CNN is applied to the triplets to get confidence scores. Positive and manually corrupted negative triplets are used to train the embeddings and the CNN model simultaneously. Experimental results on public benchmark datasets show that the proposed model outperforms state-of-the-art models at exploring unseen relationships, which indicates that CNNs are effective at learning complex interactive patterns between entities and relationships.
labels: cs.LG (all other label columns false)
__index_level_0__: 83,087
id: 0910.4012
title: Singularity Analysis of Lower-Mobility Parallel Manipulators Using Grassmann-Cayley Algebra
abstract: This paper introduces a methodology for geometrically analyzing the singularities of manipulators whose legs apply both actuation forces and constraint moments to their moving platform. Lower-mobility parallel manipulators, and parallel manipulators in which some legs have no spherical joint, are such manipulators. The geometric conditions associated with the dependency of six Plücker vectors of finite lines or lines at infinity constituting the rows of the inverse Jacobian matrix are formulated using Grassmann-Cayley algebra. Accordingly, the singularity conditions are obtained in vector form. This study is illustrated with the singularity analysis of four manipulators.
labels: cs.RO (all other label columns false)
__index_level_0__: 4,775
id: 2412.00731
title: Refine3DNet: Scaling Precision in 3D Object Reconstruction from Multi-View RGB Images using Attention
abstract: Generating 3D models from multi-view 2D RGB images has gained significant attention, extending the capabilities of technologies like Virtual Reality, Robotic Vision, and human-machine interaction. In this paper, we introduce a hybrid strategy combining CNNs and transformers, featuring a visual auto-encoder with self-attention mechanisms and a 3D refiner network, trained using a novel Joint Train Separate Optimization (JTSO) algorithm. Encoded features from unordered inputs are transformed into an enhanced feature map by the self-attention layer, decoded into an initial 3D volume, and further refined. Our network generates 3D voxels from single or multiple 2D images from arbitrary viewpoints. Performance evaluations using the ShapeNet datasets show that our approach, combined with JTSO, outperforms state-of-the-art techniques in single and multi-view 3D reconstruction, achieving the highest mean intersection over union (IOU) scores, surpassing other models by 4.2% in single-view reconstruction.
labels: cs.CV (all other label columns false)
__index_level_0__: 512,790
id: 2402.08478
title: A precise bare simulation approach to the minimization of some distances. II. Further Foundations
abstract: The constrained minimization (respectively maximization) of directed distances and of related generalized entropies is a fundamental task in information theory as well as in the adjacent fields of statistics, machine learning, artificial intelligence, signal processing and pattern recognition. In our previous paper "A precise bare simulation approach to the minimization of some distances. I. Foundations", we obtained such constrained optima by a new dimension-free precise bare (pure) simulation method, provided basically that (i) the underlying directed distance is of f-divergence type, and that (ii) this can be connected to a light-tailed probability distribution in a certain manner. In the present paper, we extend this approach such that constrained optimization problems for a very large family of directed distances and generalized entropies -- and beyond -- can be tackled by a newly developed dimension-free extended bare simulation method, for obtaining both optima as well as optimizers. Almost no assumptions (like convexity) on the set of constraints are needed, within our discrete setup of arbitrary dimension, and our method is precise (i.e., converges in the limit). For instance, we cover constrained optimizations of arbitrary f-divergences, Bregman distances, scaled Bregman distances and weighted Euclidean distances. The potential for widespread applicability is indicated, too; in particular, we deliver many recent references for uses of the involved distances/divergences in various different research fields (which may also serve as an interdisciplinary interface).
labels: cs.IT (all other label columns false)
__index_level_0__: 429,113
id: 1409.6654
title: Bayesian Error Based Sequences of Mutual Information Bounds
abstract: The inverse relation between mutual information (MI) and Bayesian error is sharpened by deriving finite sequences of upper and lower bounds on MI in terms of the minimum probability of error (MPE) and related Bayesian quantities. The well known Fano upper bound and Feder-Merhav lower bound on equivocation are tightened by including a succession of posterior probabilities starting at the largest, which directly controls the MPE, and proceeding to successively lower ones. A number of other interesting results are also derived, including a sequence of upper bounds on the MPE in terms of a previously introduced sequence of generalized posterior distributions. The tightness of the various bounds is illustrated for a simple application of joint spatial localization and spectral typing of a point source.
labels: cs.IT (all other label columns false)
__index_level_0__: 36,263
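The classic Fano bound that the abstract above sharpens relates equivocation to the minimum probability of error: H(X|Y) ≤ h(Pe) + Pe·log2(|X| - 1), with Pe the error of the MAP decoder. A numeric sanity check on a toy 3x3 joint distribution (not taken from the paper, and not the paper's tightened bounds) looks like this:

```python
from math import log2

# Numeric sanity check of the classic Fano inequality:
# H(X|Y) <= h(Pe) + Pe * log2(|X| - 1), where Pe is the minimum probability
# of error of the Bayes (MAP) decoder. The joint distribution is a toy choice.

joint = [  # p(x, y) for x, y in {0, 1, 2}; entries sum to 1
    [0.20, 0.05, 0.05],
    [0.05, 0.20, 0.05],
    [0.05, 0.05, 0.30],
]

def conditional_entropy(p) -> float:
    """Equivocation H(X|Y) = -sum_{x,y} p(x,y) log2 p(x|y)."""
    h = 0.0
    for y in range(3):
        py = sum(p[x][y] for x in range(3))
        for x in range(3):
            if p[x][y] > 0:
                h -= p[x][y] * log2(p[x][y] / py)
    return h

def bayes_error(p) -> float:
    """MAP decoder: for each observed y, guess the most likely x."""
    return 1.0 - sum(max(p[x][y] for x in range(3)) for y in range(3))

def binary_entropy(q) -> float:
    return 0.0 if q in (0.0, 1.0) else -q * log2(q) - (1 - q) * log2(1 - q)

pe = bayes_error(joint)                       # 0.30 for this toy example
hxy = conditional_entropy(joint)
fano = binary_entropy(pe) + pe * log2(3 - 1)  # Fano upper bound on H(X|Y)
print(f"H(X|Y) = {hxy:.3f} <= Fano bound = {fano:.3f}")
```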
id: 2411.01523
title: SinaTools: Open Source Toolkit for Arabic Natural Language Processing
abstract: We introduce SinaTools, an open-source Python package for Arabic natural language processing and understanding. SinaTools is a unified package allowing people to integrate it into their system workflow, offering solutions for various tasks such as flat and nested Named Entity Recognition (NER), fully-fledged Word Sense Disambiguation (WSD), Semantic Relatedness, Synonymy Extraction and Evaluation, Lemmatization, Part-of-speech Tagging, Root Tagging, and additional helper utilities such as corpus processing, text stripping methods, and diacritic-aware word matching. This paper presents SinaTools and its benchmarking results, demonstrating that SinaTools outperforms all similar tools on the aforementioned tasks, such as Flat NER (87.33%), Nested NER (89.42%), WSD (82.63%), Semantic Relatedness (0.49 Spearman rank), Lemmatization (90.5%), POS tagging (97.5%), among others. SinaTools can be downloaded from (https://sina.birzeit.edu/sinatools).
labels: cs.AI, cs.CL (all other label columns false)
__index_level_0__: 505,110
id: 2405.18339
title: What characteristics define disinformation and fake news?: review of taxonomies and definitions
abstract: What characteristics define disinformation and fake news? To address this research question, this Technical Note provides a comprehensive analysis of disinformation and fake news, synthesizing 46 definitions and highlighting four key points addressing their fundamental characteristics. Adopting the PRISMA 2020 method, five search sets with the Boolean operator AND were selected in both Portuguese and English and applied across four databases, resulting in 237 reviewed articles. Following a meticulous analysis, relevant articles were identified and included, while duplicates and inaccessible documents were excluded. The review points to disinformation as information that is totally or partially false, crafted by a sender with the aim of misleading, with opportunistic content designed to manipulate reality, amplified by individual characteristics of the receiver in their interpretation and by the contextual factors in which they are embedded. This Technical Note seeks to contribute to an understanding of the phenomenon of disinformation that includes the contextual dimension, obtaining as fundamental elements of analysis: I.) Sender; II.) Content; III.) Receiver; and IV.) Environment.
labels: cs.IT, cs.CY (all other label columns false)
__index_level_0__: 458,381
id: 2306.01014
title: Functional Ghobber-Jaming Uncertainty Principle
abstract: Let $(\{f_j\}_{j=1}^n, \{\tau_j\}_{j=1}^n)$ and $(\{g_k\}_{k=1}^n, \{\omega_k\}_{k=1}^n)$ be two p-orthonormal bases for a finite dimensional Banach space $\mathcal{X}$. Let $M,N\subseteq \{1, \dots, n\}$ be such that \begin{align*} o(M)^\frac{1}{q}o(N)^\frac{1}{p}< \frac{1}{\displaystyle \max_{1\leq j,k\leq n}|g_k(\tau_j)|}, \end{align*} where $q$ is the conjugate index of $p$. Then for all $x \in \mathcal{X}$, we show that \begin{align}\label{FGJU} (1) \quad \quad \quad \quad \|x\|\leq \left(1+\frac{1}{1-o(M)^\frac{1}{q}o(N)^\frac{1}{p}\displaystyle\max_{1\leq j,k\leq n}|g_k(\tau_j)|}\right)\left[\left(\sum_{j\in M^c}|f_j(x)|^p\right)^\frac{1}{p}+\left(\sum_{k\in N^c}|g_k(x)|^p\right)^\frac{1}{p}\right]. \end{align} We call Inequality (1) the \textbf{Functional Ghobber-Jaming Uncertainty Principle}. Inequality (1) improves the uncertainty principle obtained by Ghobber and Jaming \textit{[Linear Algebra Appl., 2011]}.
labels: cs.IT (all other label columns false)
__index_level_0__: 370,265
id: 2209.10080
title: Deep Double Descent via Smooth Interpolation
abstract: The ability of overparameterized deep networks to interpolate noisy data, while at the same time showing good generalization performance, has been recently characterized in terms of the double descent curve for the test error. Common intuition from polynomial regression suggests that overparameterized networks are able to sharply interpolate noisy data, without considerably deviating from the ground-truth signal, thus preserving generalization ability. At present, a precise characterization of the relationship between interpolation and generalization for deep networks is missing. In this work, we quantify sharpness of fit of the training data interpolated by neural network functions, by studying the loss landscape w.r.t. the input variable locally around each training point, over volumes around cleanly- and noisily-labelled training samples, as we systematically increase the number of model parameters and training epochs. Our findings show that loss sharpness in the input space follows both model- and epoch-wise double descent, with worse peaks observed around noisy labels. While small interpolating models sharply fit both clean and noisy data, large interpolating models express a smooth loss landscape, where noisy targets are predicted over large volumes around training data points, in contrast to existing intuition.
labels: cs.LG (all other label columns false)
__index_level_0__: 318,737
id: 2402.04408
title: Detection Transformer for Teeth Detection, Segmentation, and Numbering in Oral Rare Diseases: Focus on Data Augmentation and Inpainting Techniques
abstract: In this work, we focused on deep learning image processing in the context of oral rare diseases, which pose challenges due to limited data availability. A crucial step involves teeth detection, segmentation, and numbering in panoramic radiographs. To this end, we used a dataset consisting of 156 panoramic radiographs from individuals with rare oral diseases, labeled by experts. We trained the Detection Transformer (DETR) neural network for teeth detection, segmentation, and numbering across 52 teeth classes. In addition, we used data augmentation techniques, including geometric transformations. Finally, we generated new panoramic images using inpainting techniques with stable diffusion, by removing teeth from a panoramic radiograph and integrating teeth into it. The results showed an mAP exceeding 0.69 for DETR without data augmentation. The mAP improved to 0.82 when data augmentation techniques were used. Furthermore, we observed promising performance when using new panoramic radiographs generated with the inpainting technique, with an mAP of 0.76.
labels: cs.CV (all other label columns false)
__index_level_0__: 427,435
2001.01354
CAE-LO: LiDAR Odometry Leveraging Fully Unsupervised Convolutional Auto-Encoder for Interest Point Detection and Feature Description
As an important technology in 3D mapping, autonomous driving, and robot navigation, LiDAR odometry is still a challenging task. Appropriate data structures and unsupervised deep learning are the keys to achieving an easily adjustable LiDAR odometry solution with high performance. Utilizing a compact 2D structured spherical ring projection model and a voxel model which preserves the original shape of the input data, we propose a fully unsupervised Convolutional Auto-Encoder based LiDAR Odometry (CAE-LO) that detects interest points from spherical ring data using a 2D CAE and extracts features from a multi-resolution voxel model using a 3D CAE. We make several key contributions: 1) experiments based on the KITTI dataset show that our interest points can capture more local details to improve the matching success rate in unstructured scenarios, and our features outperform the state-of-the-art by more than 50% in matching inlier ratio; 2) in addition, we propose a keyframe selection method based on matching pairs transferring, an odometry refinement method for keyframes based on extended interest points from spherical rings, and a backward pose update method. The odometry refinement experiments verify the feasibility and effectiveness of the proposed ideas.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
159,474
2110.07238
On the difficulty of learning chaotic dynamics with RNNs
Recurrent neural networks (RNNs) are widespread machine learning tools for modeling sequential and time series data. They are notoriously hard to train because their loss gradients backpropagated in time tend to saturate or diverge during training. This is known as the exploding and vanishing gradient problem. Previous solutions to this issue either built on rather complicated, purpose-engineered architectures with gated memory buffers, or - more recently - imposed constraints that ensure convergence to a fixed point or restrict (the eigenspectrum of) the recurrence matrix. Such constraints, however, impose severe limitations on the expressivity of the RNN. Essential intrinsic dynamics such as multistability or chaos are disabled. This is inherently at odds with the chaotic nature of many, if not most, time series encountered in nature and society. It is particularly problematic in scientific applications where one aims to reconstruct the underlying dynamical system. Here we offer a comprehensive theoretical treatment of this problem by relating the loss gradients during RNN training to the Lyapunov spectrum of RNN-generated orbits. We mathematically prove that RNNs producing stable equilibrium or cyclic behavior have bounded gradients, whereas the gradients of RNNs with chaotic dynamics always diverge. Based on these analyses and insights we suggest ways to optimize the training process on chaotic data according to the system's Lyapunov spectrum, regardless of the employed RNN architecture.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
260,912
1909.00440
Can A User Anticipate What Her Followers Want?
Whenever a social media user decides to share a story, she is typically pleased to receive likes, comments, shares, or, more generally, feedback from her followers. As a result, she may feel compelled to use the feedback she receives to (re-)estimate her followers' preferences and decide which stories to share next to receive more (positive) feedback. Under which conditions can she succeed? In this work, we first look into this problem from a theoretical perspective and then provide a set of practical algorithms to identify and characterize such behavior in social media. More specifically, we address the above problem from the viewpoint of sequential decision making and utility maximization. For a wide variety of utility functions, we first show that, to succeed, a user needs to actively trade off exploitation--sharing stories which lead to more (positive) feedback--and exploration--sharing stories to learn about her followers' preferences. However, exploration is not necessary if a user utilizes the feedback her followers provide to other users in addition to the feedback she receives. Then, we develop a utility estimation framework for observational data, which relies on statistical hypothesis testing to determine whether a user utilizes the feedback she receives from each of her followers to decide what to post next. Experiments on synthetic data illustrate our theoretical findings and show that our estimation framework is able to accurately recover users' underlying utility functions. Experiments on several real datasets gathered from Twitter and Reddit reveal that up to 82% (43%) of the Twitter (Reddit) users in our datasets do use the feedback they receive to decide what to post next.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
143,636
1904.10656
Mapping Hearthstone Deck Spaces through MAP-Elites with Sliding Boundaries
Quality diversity (QD) algorithms such as MAP-Elites have emerged as a powerful alternative to traditional single-objective optimization methods. They were initially applied to evolutionary robotics problems such as locomotion and maze navigation, but have yet to see widespread application. We argue that these algorithms are perfectly suited to the rich domain of video games, which contains many relevant problems with a multitude of successful strategies and often also multiple dimensions along which solutions can vary. This paper introduces a novel modification of the MAP-Elites algorithm called MAP-Elites with Sliding Boundaries (MESB) and applies it to the design and rebalancing of Hearthstone, a popular collectible card game chosen for its number of multidimensional behavior features relevant to particular styles of play. To avoid overpopulating cells with conflated behaviors, MESB slides the boundaries of cells based on the distribution of evolved individuals. Experiments in this paper demonstrate the performance of MESB in Hearthstone. Results suggest MESB finds diverse ways of playing the game well along the selected behavioral dimensions. Further analysis of the evolved strategies reveals common patterns that recur across behavioral dimensions and explores how MESB can help rebalance the game.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
128,681
2502.12185
Large Language Models for Extrapolative Modeling of Manufacturing Processes
Conventional predictive modeling of parametric relationships in manufacturing processes is limited by the subjectivity of human expertise and intuition on the one hand and by the cost and time of experimental data generation on the other hand. This work addresses this issue by establishing a new Large Language Model (LLM) framework. The novelty lies in combining automatic extraction of process-relevant knowledge embedded in the literature with iterative model refinement based on a small amount of experimental data. This approach is evaluated on three distinct manufacturing processes that are based on machining, deformation, and additive principles. The results show that for the same small experimental data budget the models derived by our framework have unexpectedly high extrapolative performance, often surpassing the capabilities of conventional Machine Learning. Further, our approach eliminates manual generation of initial models or expertise-dependent interpretation of the literature. The results also reveal the importance of the nature of the knowledge extracted from the literature and the significance of both the knowledge extraction and model refinement components.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
534,739
2304.03812
High-order Spatial Interactions Enhanced Lightweight Model for Optical Remote Sensing Image-based Small Ship Detection
Accurate and reliable optical remote sensing image-based small-ship detection is crucial for maritime surveillance systems, but existing methods often struggle with balancing detection performance and computational complexity. In this paper, we propose a novel lightweight framework called \textit{HSI-ShipDetectionNet} that is based on high-order spatial interactions and is suitable for deployment on resource-limited platforms, such as satellites and unmanned aerial vehicles. HSI-ShipDetectionNet includes a prediction branch specifically for tiny ships and a lightweight hybrid attention block for reduced complexity. Additionally, the use of a high-order spatial interactions module improves advanced feature understanding and modeling ability. Our model is evaluated using the public Kaggle marine ship detection dataset and compared with multiple state-of-the-art models, including small object detection models, lightweight detection models, and ship detection models. The results show that HSI-ShipDetectionNet outperforms the other models in terms of recall and mean average precision (mAP), while being lightweight and suitable for deployment on resource-limited platforms.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
356,952
1805.08102
PiPs: a Kernel-based Optimization Scheme for Analyzing Non-Stationary 1D Signals
This paper proposes a novel kernel-based optimization scheme to handle analysis tasks, e.g., signal spectral estimation and single-channel source separation, for 1D non-stationary oscillatory data. The key insight of our optimization scheme for reconstructing the time-frequency information is that when a nonparametric regression is applied on some input values, the output regressed points would lie near the oscillatory pattern of the oscillatory 1D signal only if these input values are a good approximation of the ground-truth phase function. In this work, a Gaussian Process (GP) is chosen to conduct this nonparametric regression: the oscillatory pattern is encoded as the Pattern-inducing Points (PiPs), which act as the training data points in the GP regression, while the targeted phase function is fed in to compute the correlation kernels, acting as the testing input. A better-approximated phase function generates more precise kernels, thus resulting in a smaller optimization loss when comparing the kernel-based regression output with the original signals. To the best of our knowledge, this is the first algorithm that can satisfactorily handle fully non-stationary oscillatory data, close and crossover frequencies, and general oscillatory patterns. Even in the example of a signal produced by slow variation in the parameters of a trigonometric expansion, we show that PiPs admits competitive or better performance in terms of accuracy and robustness than existing state-of-the-art algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
98,049
2010.09572
Teacher-Student Competition for Unsupervised Domain Adaptation
With only class-level supervision from the source domain, existing unsupervised domain adaptation (UDA) methods mainly learn domain-invariant representations from a shared feature extractor, which causes the source-bias problem. This paper proposes an unsupervised domain adaptation approach with Teacher-Student Competition (TSC). In particular, a student network is introduced to learn the target-specific feature space, and we design a novel competition mechanism to select more credible pseudo-labels for the training of the student network. We introduce a teacher network with the structure of an existing conventional UDA method, and both the teacher and student networks compete to provide target pseudo-labels to constrain the training of every target sample in the student network. Extensive experiments demonstrate that our proposed TSC framework significantly outperforms the state-of-the-art domain adaptation methods on the Office-31 and ImageCLEF-DA benchmarks.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
201,596
1108.2462
Enhanced public key security for the McEliece cryptosystem
This paper studies a variant of the McEliece cryptosystem able to ensure that the code used as the public key is no longer permutation-equivalent to the secret code. This increases the security level of the public key, thus opening the way for reconsidering the adoption of classical families of codes, like Reed-Solomon codes, that have long been excluded from the McEliece cryptosystem for security reasons. It is well known that codes of these classes are able to yield a reduction in the key size or, equivalently, an increased level of security against information set decoding; these are the main advantages of the proposed solution. We also describe possible vulnerabilities and attacks related to the considered system, and show which design choices are best suited to avoid them.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
11,628
cs/9712101
When Gravity Fails: Local Search Topology
Local search algorithms for combinatorial search problems frequently encounter a sequence of states in which it is impossible to improve the value of the objective function; moves through these regions, called plateau moves, dominate the time spent in local search. We analyze and characterize plateaus for three different classes of randomly generated Boolean Satisfiability problems. We identify several interesting features of plateaus that impact the performance of local search algorithms. We show that local minima tend to be small but occasionally may be very large. We also show that local minima can be escaped without unsatisfying a large number of clauses, but that systematically searching for an escape route may be computationally expensive if the local minimum is large. We show that plateaus with exits, called benches, tend to be much larger than minima, and that some benches have very few exit states which local search can use to escape. We show that the solutions (i.e., global minima) of randomly generated problem instances form clusters, which behave similarly to local minima. We revisit several enhancements of local search algorithms and explain their performance in light of our results. Finally we discuss strategies for creating the next generation of local search algorithms.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
540,372
1802.06768
Design and software implementation of subsystems for creating and using the ontological base of a research scientist
Creation of information systems and tools for supporting scientific research and development has always been one of the central directions of the development of computer science. The main features of the modern evolution of scientific research and development are the transdisciplinary approach and the deep intellectualisation of all stages of the life cycle of formulating and solving scientific problems. The theoretical and practical aspects of the development of prospective complex knowledge-oriented information systems and their components are considered in the paper. The work analyses existing scientific information systems (current research information systems, CRIS) and synthesises general principles for the design of a researcher's research and development workstation environment and its components. The functional components of this knowledge-oriented information system are designed and implemented, including functional models and a software subsystem for creating and using an ontological knowledge base of a researcher's publications, as part of the personalized knowledge base of a research scientist. Research under the modern e-Science paradigm requires pooling the scientific community and an intensive exchange of research results, which may be achieved through the use of scientific information systems. The research and development workstation environment makes it possible to constructivise and formalise the representation of knowledge obtained during the research process and to support interaction among research collaborators.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
90,740
2107.02655
Automatic size and pose homogenization with spatial transformer network to improve and accelerate pediatric segmentation
Due to a high heterogeneity in pose and size and to a limited number of available data, segmentation of pediatric images is challenging for deep learning methods. In this work, we propose a new CNN architecture that is pose and scale invariant thanks to the use of a Spatial Transformer Network (STN). Our architecture is composed of three sequential modules that are estimated together during training: (i) a regression module to estimate a similarity matrix to normalize the input image to a reference one; (ii) a differentiable module to find the region of interest to segment; (iii) a segmentation module, based on the popular UNet architecture, to delineate the object. Unlike the original UNet, which strives to learn a complex mapping, including pose and scale variations, from a finite training dataset, our segmentation module learns a simpler mapping focusing on images with normalized pose and size. Furthermore, the use of automatic bounding box detection through the STN saves time and especially memory, while keeping similar performance. We test the proposed method on kidney and renal tumor segmentation in abdominal pediatric CT scans. Results indicate that the estimated STN homogenization of size and pose accelerates the segmentation (25h), compared to standard data augmentation (33h), while obtaining a similar quality for the kidney (Dice score of 88.01\%) and improving the renal tumor delineation (from 85.52\% to 87.12\%).
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
244,894
2412.16476
Query Quantized Neural SLAM
Neural implicit representations have shown remarkable abilities in jointly modeling geometry, color, and camera poses in simultaneous localization and mapping (SLAM). Current methods use coordinates, positional encodings, or other geometry features as input to query neural implicit functions for signed distances and color, which produce rendering errors to drive the optimization in overfitting image observations. However, due to the run-time efficiency requirement in SLAM systems, we are merely allowed to conduct optimization on each frame in a few iterations, which is far from enough for neural networks to overfit these queries. The underfitting usually results in severe drifts in camera tracking and artifacts in reconstruction. To resolve this issue, we propose query quantized neural SLAM, which uses quantized queries to reduce variations of the input, making it much easier and faster to overfit a frame. To this end, we quantize a query into a discrete representation with a set of codes, and only allow neural networks to observe a finite number of variations. This allows neural networks to become increasingly familiar with these codes after overfitting more and more previous frames. Moreover, we also introduce novel initialization, losses, and augmentation to stabilize the optimization with significant uncertainty in the early optimization stage, constrain the optimization space, and estimate camera poses more accurately. We justify the effectiveness of each design and report visual and numerical comparisons on widely used benchmarks to show our superiority over the latest methods in both reconstruction and camera tracking.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
519,539
2002.05294
Cooperative Observation of Targets moving over a Planar Graph with Prediction of Positions
Consider a team with two types of agents: targets and observers. Observers are aerial UAVs that observe targets moving on land, with their movements restricted to the paths that form a planar graph on the surface. Observers have a limited range of vision, and targets do not avoid observers. The objective is to maximize the integral of the number of targets observed over the observation interval. Taking advantage of the fact that the future positions of targets are predictable in the short term, we present in this article a modified hill climbing algorithm that surpasses its previous versions in this new setting of the CTO problem.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
163,852
2110.08307
GrowSpace: Learning How to Shape Plants
Plants are dynamic systems that are integral to our existence and survival. Plants face environmental changes and adapt over time to their surrounding conditions. We argue that plant responses to an environmental stimulus are a good example of a real-world problem that can be approached within a reinforcement learning (RL) framework. With the objective of controlling a plant by moving the light source, we propose GrowSpace as a new RL benchmark. The back-end of the simulator is implemented using the Space Colonisation Algorithm, a plant growing model based on competition for space. Compared to video game RL environments, this simulator addresses a real-world problem and serves as a test bed to visualize plant growth and movement in a faster way than physical experiments. GrowSpace is composed of a suite of challenges that tackle several problems such as control, multi-stage learning, fairness and multi-objective learning. We provide agent baselines alongside case studies to demonstrate the difficulty of the proposed benchmark.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
261,337
2311.00087
Seeking Truth and Beauty in Flavor Physics with Machine Learning
The discovery process of building new theoretical physics models involves the dual aspect of both fitting to the existing experimental data and satisfying abstract theorists' criteria like beauty, naturalness, etc. We design loss functions for performing both of those tasks with machine learning techniques. We use the Yukawa quark sector as a toy example to demonstrate that the optimization of these loss functions results in true and beautiful models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
404,504
2403.19154
STaR-GATE: Teaching Language Models to Ask Clarifying Questions
When prompting language models to complete a task, users often leave important aspects unsaid. While asking questions could resolve this ambiguity (GATE; Li et al., 2023), models often struggle to ask good questions. We explore a language model's ability to self-improve (STaR; Zelikman et al., 2022) by rewarding the model for generating useful questions, a simple method we dub STaR-GATE. We generate a synthetic dataset of 25,500 unique persona-task prompts to simulate conversations between a pretrained language model (the Questioner) and a Roleplayer whose preferences are unknown to the Questioner. By asking questions, the Questioner elicits preferences from the Roleplayer. The Questioner is iteratively finetuned on questions that increase the probability of high-quality responses to the task, which are generated by an Oracle with access to the Roleplayer's latent preferences. After two iterations of self-improvement, the Questioner asks better questions, allowing it to generate responses that are preferred over responses from the initial model on 72% of tasks. Our results indicate that teaching a language model to ask better questions leads to better personalized responses.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
442,231
2104.04026
Fast Regression of the Tritium Breeding Ratio in Fusion Reactors
The tritium breeding ratio (TBR) is an essential quantity for the design of modern and next-generation D-T fueled nuclear fusion reactors. Representing the ratio between tritium fuel generated in breeding blankets and fuel consumed during reactor runtime, the TBR depends on reactor geometry and material properties in a complex manner. In this work, we explored the training of surrogate models to produce a cheap but high-quality approximation for a Monte Carlo TBR model in use at the UK Atomic Energy Authority. We investigated possibilities for dimensional reduction of its feature space, reviewed 9 families of surrogate models for potential applicability, and performed hyperparameter optimisation. Here we present the performance and scaling properties of these models, the fastest of which, an artificial neural network, demonstrated $R^2=0.985$ and a mean prediction time of $0.898\ \mu\mathrm{s}$, representing a relative speedup of $8\cdot 10^6$ with respect to the expensive MC model. We further present a novel adaptive sampling algorithm, Quality-Adaptive Surrogate Sampling, capable of interfacing with any of the individually studied surrogates. Our preliminary testing on a toy TBR theory has demonstrated the efficacy of this algorithm for accelerating the surrogate modelling process.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
229,256
1807.11433
REFUGE CHALLENGE 2018-Task 2:Deep Optic Disc and Cup Segmentation in Fundus Images Using U-Net and Multi-scale Feature Matching Networks
In this paper, an optic disc and cup segmentation method is proposed using U-Net followed by a multi-scale feature matching network. The proposed method targets task 2 of the REFUGE challenge 2018. In order to solve the segmentation problem of task 2, we first crop the input image using a single shot multibox detector (SSD). The cropped image is then passed to an encoder-decoder network with skip connections, also known as a generator. Afterwards, both the ground truth and generated images are fed to a convolutional neural network (CNN) to extract their multi-level features. A dice loss function is then used to match the features of the two images by minimizing the error at each layer. The aggregation of error from each layer is back-propagated through the generator network to enforce it to generate a segmented image closer to the ground truth. The CNN network improves the performance of the generator network without increasing the complexity of the model.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
104,177
2303.17459
coExplore: Combining multiple rankings for multi-robot exploration
Multi-robot exploration is a field which tackles the challenge of exploring a previously unknown environment with a number of robots. This is especially relevant for search and rescue operations where time is essential. Current state-of-the-art approaches are able to explore a given environment with a large number of robots by assigning them to frontiers. However, this assignment generally favors large frontiers and hence omits potentially valuable medium-sized frontiers. In this paper we showcase a novel multi-robot exploration algorithm, which improves and adapts the existing approaches. Through the addition of information gain based ranking we improve the exploration time for closed urban environments while maintaining similar exploration performance compared to the state-of-the-art for open environments. Accompanying this paper, we further publish our research code in order to lower the barrier to entry for further multi-robot exploration research. We evaluate the performance in three simulated scenarios, two urban and one open scenario, where our algorithm outperforms the state of the art by 5% overall.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
355,206
2110.11918
MIGS: Meta Image Generation from Scene Graphs
Generation of images from scene graphs is a promising direction towards explicit scene generation and manipulation. However, the images generated from scene graphs lack quality, partly due to the high difficulty and diversity of the data. We propose MIGS (Meta Image Generation from Scene Graphs), a meta-learning based approach for few-shot image generation from graphs that enables adapting the model to different scenes and increases the image quality by training on diverse sets of tasks. By sampling the data in a task-driven fashion, we train the generator using meta-learning on different sets of tasks that are categorized based on the scene attributes. Our results show that using this meta-learning approach for the generation of images from scene graphs achieves state-of-the-art performance in terms of image quality and capturing the semantic relationships in the scene. Project Website: https://migs2021.github.io/
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
262,650
2107.11156
Training a neural network with exciton-polariton optical nonlinearity
In contrast to software simulations of neural networks, hardware implementations have often limited or no tunability. While such networks promise great improvements in terms of speed and energy efficiency, their performance is limited by the difficulty to apply efficient training. We propose and realize experimentally an optical system where highly efficient backpropagation training can be applied through an array of highly nonlinear, non-tunable nodes. The system includes exciton-polariton nodes realizing nonlinear activation functions. We demonstrate a high classification accuracy in the MNIST handwritten digit benchmark in a single hidden layer system.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
247,517
2204.11846
Graphical Residual Flows
Graphical flows add further structure to normalizing flows by encoding non-trivial variable dependencies. Previous graphical flow models have focused primarily on a single flow direction: the normalizing direction for density estimation, or the generative direction for inference. However, to use a single flow to perform tasks in both directions, the model must exhibit stable and efficient flow inversion. This work introduces graphical residual flows, a graphical flow based on invertible residual networks. Our approach to incorporating dependency information in the flow means that we are able to calculate the Jacobian determinant of these flows exactly. Our experiments confirm that graphical residual flows provide stable and accurate inversion that is also more time-efficient than alternative flows with similar task performance. Furthermore, our model provides performance competitive with other graphical flows for both density estimation and inference tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
293,289
2411.15717
Learning Algorithm Hyperparameters for Fast Parametric Convex Optimization
We introduce a machine-learning framework to learn the hyperparameter sequence of first-order methods (e.g., the step sizes in gradient descent) to quickly solve parametric convex optimization problems. Our computational architecture amounts to running fixed-point iterations where the hyperparameters are the same across all parametric instances and consists of two phases. In the first step-varying phase the hyperparameters vary across iterations, while in the second steady-state phase the hyperparameters are constant across iterations. Our learned optimizer is flexible in that it can be evaluated on any number of iterations and is guaranteed to converge to an optimal solution. To train, we minimize the mean square error to a ground truth solution. In the case of gradient descent, the one-step optimal step size is the solution to a least squares problem, and in the case of unconstrained quadratic minimization, we can compute the two and three-step optimal solutions in closed-form. In other cases, we backpropagate through the algorithm steps to minimize the training objective after a given number of steps. We show how to learn hyperparameters for several popular algorithms: gradient descent, proximal gradient descent, and two ADMM-based solvers: OSQP and SCS. We use a sample convergence bound to obtain generalization guarantees for the performance of our learned algorithm for unseen data, providing both lower and upper bounds. We showcase the effectiveness of our method with many examples, including ones from control, signal processing, and machine learning. Remarkably, our approach is highly data-efficient in that we only use $10$ problem instances to train the hyperparameters in all of our examples.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
510,738
2206.07989
Double Check Your State Before Trusting It: Confidence-Aware Bidirectional Offline Model-Based Imagination
The learned policy of model-free offline reinforcement learning (RL) methods is often constrained to stay within the support of datasets to avoid possible dangerous out-of-distribution actions or states, making it challenging to handle out-of-support regions. Model-based RL methods offer a richer dataset and benefit generalization by generating imaginary trajectories with either a trained forward or reverse dynamics model. However, the imagined transitions may be inaccurate, thus downgrading the performance of the underlying offline RL method. In this paper, we propose to augment the offline dataset by using trained bidirectional dynamics models and rollout policies with a double check. We introduce conservatism by trusting samples that the forward model and backward model agree on. Our method, confidence-aware bidirectional offline model-based imagination, generates reliable samples and can be combined with any model-free offline RL method. Experimental results on the D4RL benchmarks demonstrate that our method significantly boosts the performance of existing model-free offline RL algorithms and achieves competitive or better scores against baseline methods.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
302,959
2305.00514
Discriminative Co-Saliency and Background Mining Transformer for Co-Salient Object Detection
Most previous co-salient object detection works mainly focus on extracting co-salient cues via mining the consistency relations across images while ignoring explicit exploration of background regions. In this paper, we propose a Discriminative co-saliency and background Mining Transformer framework (DMT) based on several economical multi-grained correlation modules to explicitly mine both co-saliency and background information and effectively model their discrimination. Specifically, we first propose a region-to-region correlation module for introducing inter-image relations to pixel-wise segmentation features while maintaining computational efficiency. Then, we use two types of pre-defined tokens to mine co-saliency and background information via our proposed contrast-induced pixel-to-token correlation and co-saliency token-to-token correlation modules. We also design a token-guided feature refinement module to enhance the discriminability of the segmentation features under the guidance of the learned tokens. We perform iterative mutual promotion for the segmentation feature extraction and token construction. Experimental results on three benchmark datasets demonstrate the effectiveness of our proposed method. The source code is available at: https://github.com/dragonlee258079/DMT.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
361,358
2104.14125
Hardware Architecture of Embedded Inference Accelerator and Analysis of Algorithms for Depthwise and Large-Kernel Convolutions
In order to handle modern convolutional neural networks (CNNs) efficiently, a hardware architecture of CNN inference accelerator is proposed to handle depthwise convolutions and regular convolutions, which are both essential building blocks for embedded-computer-vision algorithms. Different from related works, the proposed architecture can support filter kernels with different sizes with high flexibility since it does not require extra costs for intra-kernel parallelism, and it can generate convolution results faster than the architecture of the related works. The experimental results show the importance of supporting depthwise convolutions and dilated convolutions with the proposed hardware architecture. In addition to depthwise convolutions with large-kernels, a new structure called DDC layer, which includes the combination of depthwise convolutions and dilated convolutions, is also analyzed in this paper. For face detection, the computational costs decrease by 30%, and the model size decreases by 20% when the DDC layers are applied to the network. For image classification, the accuracy is increased by 1% by simply replacing $3 \times 3$ filters with $5 \times 5$ filters in depthwise convolutions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
232,724
1207.4766
Computer control of gene expression: Robust setpoint tracking of protein mean and variance using integral feedback
Protein mean and variance levels in a simple stochastic gene expression circuit are controlled using proportional-integral feedback. It is shown that the protein mean level can be globally and robustly tracked to any desired value using a simple PI controller that satisfies explicit sufficient conditions. Controlling both the mean and variance, on the other hand, requires the use of an additional control input, chosen here as the mRNA degradation rate. Local robust tracking of mean and variance is proved to be achievable using multivariable PI control, provided that the reference point satisfies necessary conditions imposed by the system. Even more importantly, it is shown that there exist PI controllers that locally, robustly and simultaneously stabilize all the equilibrium points inside the admissible region. Simulation examples illustrate the results.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
17,667
2012.04839
Robust Domain Randomised Reinforcement Learning through Peer-to-Peer Distillation
In reinforcement learning, domain randomisation is an increasingly popular technique for learning more general policies that are robust to domain shifts at deployment. However, naively aggregating information from randomised domains may lead to high variance in gradient estimation and an unstable learning process. To address this issue, we present a peer-to-peer online distillation strategy for RL termed P2PDRL, where multiple workers are each assigned to a different environment and exchange knowledge through mutual regularisation based on Kullback-Leibler divergence. Our experiments on continuous control tasks show that P2PDRL enables robust learning across a wider randomisation distribution than baselines, and more robust generalisation to new environments at testing.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
210,585
2405.19013
On Dissipativity of Cross-Entropy Loss in Training ResNets
The training of ResNets and neural ODEs can be formulated and analyzed from the perspective of optimal control. This paper proposes a dissipative formulation of the training of ResNets and neural ODEs for classification problems by including a variant of the cross-entropy as a regularization in the stage cost. Based on the dissipative formulation of the training, we prove that the trained ResNet exhibits the turnpike phenomenon. We then illustrate that the training exhibits the turnpike phenomenon by training on the two-spirals and MNIST datasets. This can be used to find very shallow networks suitable for a given classification task.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
458,711
2405.17361
A One-Layer Decoder-Only Transformer is a Two-Layer RNN: With an Application to Certified Robustness
This paper reveals a key insight that a one-layer decoder-only Transformer is equivalent to a two-layer Recurrent Neural Network (RNN). Building on this insight, we propose ARC-Tran, a novel approach for verifying the robustness of decoder-only Transformers against arbitrary perturbation spaces. Compared to ARC-Tran, current robustness verification techniques are limited either to specific and length-preserving perturbations like word substitutions or to recursive models like LSTMs. ARC-Tran addresses these limitations by meticulously managing position encoding to prevent mismatches and by utilizing our key insight to achieve precise and scalable verification. Our evaluation shows that ARC-Tran (1) trains models more robust to arbitrary perturbation spaces than those produced by existing techniques and (2) shows high certification accuracy of the resulting models.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
457,870
1811.09735
A Multi-variable Stacked Long-Short Term Memory Network for Wind Speed Forecasting
Precisely forecasting wind speed is essential for wind power producers and grid operators. However, this task is challenging due to the stochasticity of wind speed. To accurately predict short-term wind speed under uncertainties, this paper proposes a multi-variable stacked LSTM model (MSLSTM). The proposed method utilizes multiple historical meteorological variables, such as wind speed, temperature, humidity, pressure, dew point and solar radiation, to accurately predict wind speeds. The prediction performance is extensively assessed using real data collected in West Texas, USA. The experimental results show that the proposed MSLSTM can preferably capture and learn uncertainties while delivering competitive performance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
114,306
2405.15598
MCDFN: Supply Chain Demand Forecasting via an Explainable Multi-Channel Data Fusion Network Model
Accurate demand forecasting is crucial for optimizing supply chain management. Traditional methods often fail to capture complex patterns from seasonal variability and special events. Despite advancements in deep learning, interpretable forecasting models remain a challenge. To address this, we introduce the Multi-Channel Data Fusion Network (MCDFN), a hybrid architecture that integrates Convolutional Neural Networks (CNN), Long Short-Term Memory networks (LSTM), and Gated Recurrent Units (GRU) to enhance predictive performance by extracting spatial and temporal features from time series data. Our comparative benchmarking demonstrates that MCDFN outperforms seven other deep-learning models, achieving superior metrics: MSE (23.5738), RMSE (4.8553), MAE (3.9991), and MAPE (20.1575%). Additionally, MCDFN's predictions were statistically indistinguishable from actual values, confirmed by a paired t-test with a 5% p-value and a 10-fold cross-validated statistical paired t-test. We apply explainable AI techniques like ShapTime and Permutation Feature Importance to enhance interpretability. This research advances demand forecasting methodologies and offers practical guidelines for integrating MCDFN into supply chain systems, highlighting future research directions for scalability and user-friendly deployment.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
457,010
2407.03515
Feature-Specific Coefficients of Determination in Tree Ensembles
Tree ensemble methods provide promising predictions with models that are difficult to interpret. The recent introduction of Shapley values for individualized feature contributions, accompanied by several fast computing algorithms for predicted values, shows intriguing results. However, individualizing coefficients of determination, aka $R^2$, for each feature is challenged by the underlying quadratic losses, although these coefficients allow us to comparatively assess a single feature's contribution to tree ensembles. Here we propose an efficient algorithm, Q-SHAP, that reduces the computational complexity to polynomial time when calculating Shapley values related to quadratic losses. Our extensive simulation studies demonstrate that this approach not only enhances computational efficiency but also improves estimation accuracy of feature-specific coefficients of determination.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
470,177
1410.0461
Vehicle Parameter Independent Gain Matrix Selection for a Quadrotor using State-Space Controller Design Methods
With quadrotor use seeing extensive growth in recent years, the autonomous control of these Unmanned Aerial Vehicles (UAVs) is an increasingly relevant and interesting field. In this paper a linear state-space approach to designing a stable hover controller in the presence of disturbances is presented along with simulation of control system performance. Additionally, the design of a tracking system, for linear inertial position and yaw, is presented with simulation results. The gain matrix developed for this control system is independent of the specific quadrotor parameters, meaning that this same gain matrix can be used on a wide variety of quadrotors without modification. The hover and tracking controllers designed in this paper proved to perform well in simulation under perturbation disturbances and normally distributed disturbances on the UAV's linear speeds and angular speeds.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
36,471
1402.5461
Preliminary Studies on Force/Motion Control of Intelligent Mechanical Systems
To rationalize the relatively high investment that industrial automation systems entail, research in the field of intelligent machines should target high value functions such as fettling, die-finishing, deburring, and fixtureless manufacturing. For achieving this goal, past work has concentrated on force control algorithms at the system level with limited focus on performance expansion at the actuator level. We present a comprehensive literature review on robot force control, including algorithms, specialized actuators, and robot control software. A robot force control testbed was developed using Schunk's PowerCube 6-DOF Arm and a six-axis ATI force/torque sensor. Using parameter identification experiments, manipulator module inertias and the motor torque constant were estimated. Experiments were conducted to study the practical issues involved in implementing stable contact transitions and programmable endpoint impedance. Applications to human augmentation, virtual fixtures, and teleoperation are discussed. These experiments are used as a vehicle to understand the performance improvement achievable at the actuator level. The approach at UTRRG has been to maximize the choices within the actuator to enhance its intelligence. Drawing on this 20-year research history in electromechanical actuator architecture, we propose a new concept that mixes two inputs, distinct in their velocity ratios, within the same dual actuator called a Force/Motion Actuator (FMA). Detailed kinematic and dynamic models of this dual actuator are developed. The actuator performance is evaluated using simulations with an output velocity specification and resolving input trajectories using a minimum-norm solution. It is shown that a design choice of 14:1 motion scaling between the two inputs results in good sensitivity to output force disturbances without compromising velocity tracking performance.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
31,061
2408.05803
Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI
Automated breast tumor segmentation on the basis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has shown great promise in clinical practice, particularly for identifying the presence of breast disease. However, accurate segmentation of breast tumors is a challenging task, often necessitating the development of complex networks. To strike an optimal trade-off between computational costs and segmentation performance, we propose a hybrid network via the combination of convolutional neural network (CNN) and transformer layers. Specifically, the hybrid network consists of an encoder-decoder architecture built by stacking convolution and deconvolution layers. Effective 3D transformer layers are then implemented after the encoder subnetworks, to capture global dependencies between the bottleneck features. To improve the efficiency of the hybrid network, two parallel encoder subnetworks are designed for the decoder and the transformer layers, respectively. To further enhance the discriminative capability of the hybrid network, a prototype learning guided prediction module is proposed, where the category-specific prototypical features are calculated through online clustering. All learned prototypical features are finally combined with the features from the decoder for tumor mask prediction. The experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance compared to state-of-the-art (SOTA) methods, while maintaining a balance between segmentation accuracy and computation cost. Moreover, we demonstrate that automatically generated tumor masks can be effectively applied to identify the HER2-positive subtype from the HER2-negative subtype with similar accuracy to analysis based on manual tumor segmentation. The source code is available at https://github.com/ZhouL-lab/PLHN.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
479,947
2207.14811
StyleLight: HDR Panorama Generation for Lighting Estimation and Editing
We present a new lighting estimation and editing framework to generate high-dynamic-range (HDR) indoor panorama lighting from a single limited field-of-view (LFOV) image captured by low-dynamic-range (LDR) cameras. Existing lighting estimation methods either directly regress lighting representation parameters or decompose this problem into LFOV-to-panorama and LDR-to-HDR lighting generation sub-tasks. However, due to the partial observation, the high-dynamic-range lighting, and the intrinsic ambiguity of a scene, lighting estimation remains a challenging task. To tackle this problem, we propose a coupled dual-StyleGAN panorama synthesis network (StyleLight) that integrates LDR and HDR panorama synthesis into a unified framework. The LDR and HDR panorama synthesis share a similar generator but have separate discriminators. During inference, given an LDR LFOV image, we propose a focal-masked GAN inversion method to find its latent code by the LDR panorama synthesis branch and then synthesize the HDR panorama by the HDR panorama synthesis branch. StyleLight takes LFOV-to-panorama and LDR-to-HDR lighting generation into a unified framework and thus greatly improves lighting estimation. Extensive experiments demonstrate that our framework achieves superior performance over state-of-the-art methods on indoor lighting estimation. Notably, StyleLight also enables intuitive lighting editing on indoor HDR panoramas, which is suitable for real-world applications. Code is available at https://style-light.github.io.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
310,713
2304.14115
Inferring Preferences from Demonstrations in Multi-objective Reinforcement Learning: A Dynamic Weight-based Approach
Many decision-making problems feature multiple objectives. In such problems, it is not always possible to know the preferences of a decision-maker for different objectives. However, it is often possible to observe the behavior of decision-makers. In multi-objective decision-making, preference inference is the process of inferring the preferences of a decision-maker for different objectives. This research proposes a Dynamic Weight-based Preference Inference (DWPI) algorithm that can infer the preferences of agents acting in multi-objective decision-making problems, based on observed behavior trajectories in the environment. The proposed method is evaluated on three multi-objective Markov decision processes: Deep Sea Treasure, Traffic, and Item Gathering. The performance of the proposed DWPI approach is compared to two existing preference inference methods from the literature, and empirical results demonstrate significant improvements compared to the baseline algorithms, in terms of both time requirements and accuracy of the inferred preferences. The Dynamic Weight-based Preference Inference algorithm also maintains its performance when inferring preferences for sub-optimal behavior demonstrations. In addition to its impressive performance, the Dynamic Weight-based Preference Inference algorithm does not require any interactions during training with the agent whose preferences are inferred; all that is required is a trajectory of observed behavior.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
360,831
2206.09403
MME-CRS: Multi-Metric Evaluation Based on Correlation Re-Scaling for Evaluating Open-Domain Dialogue
Automatic open-domain dialogue evaluation is a crucial component of dialogue systems. Recently, learning-based evaluation metrics have achieved state-of-the-art performance in open-domain dialogue evaluation. However, these metrics focus on only a few qualities and thus struggle to evaluate dialogue comprehensively. Furthermore, these metrics lack an effective score composition approach for diverse evaluation qualities. To address the above problems, we propose a Multi-Metric Evaluation based on Correlation Re-Scaling (MME-CRS) for evaluating open-domain dialogue. Firstly, we build an evaluation metric composed of 5 groups of parallel sub-metrics called Multi-Metric Evaluation (MME) to evaluate the quality of dialogue comprehensively. Furthermore, we propose a novel score composition method called Correlation Re-Scaling (CRS) to model the relationship between sub-metrics and diverse qualities. Our approach MME-CRS ranks first on the final test data of the DSTC10 track 5 subtask 1 Automatic Open-domain Dialogue Evaluation Challenge by a large margin, which proves the effectiveness of our proposed approach.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
303,559
1306.0095
Universal Induction with Varying Sets of Combinators
Universal induction is a crucial issue in AGI. Its practical applicability can be achieved by the choice of the reference machine or representation of algorithms agreed with the environment. This machine should be updatable for solving subsequent tasks more efficiently. We study this problem on the example of combinatory logic as a very simple Turing-complete reference machine, which enables modifying program representations by introducing different sets of primitive combinators. A genetic programming system is used to search for combinator expressions, which are easily decomposed into sub-expressions to be recombined in crossover. Our experiments show that low-complexity induction or prediction tasks can be solved by the developed system (much more efficiently than using brute force); useful combinators can be revealed and included into the representation, simplifying more difficult tasks. However, optimal sets of combinators depend on the specific task, so the reference machine should be adaptively chosen in coordination with the search engine.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
24,927
2207.09785
Unsupervised energy disaggregation via convolutional sparse coding
In this work, a method for unsupervised energy disaggregation in private households equipped with smart meters is proposed. This method aims to classify power consumption as active or passive, granting the ability to report on the residents' activity and presence without direct interaction. This lays the foundation for applications like non-intrusive health monitoring of private homes. The proposed method is based on minimizing a suitable energy functional, for which the iPALM (inertial proximal alternating linearized minimization) algorithm is employed, demonstrating that various conditions guaranteeing convergence are satisfied. In order to confirm feasibility of the proposed method, experiments on semi-synthetic test data sets and a comparison to existing, supervised methods are provided.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
309,028
2312.10369
Proportional Representation in Metric Spaces and Low-Distortion Committee Selection
We introduce a novel definition for a small set R of k points being "representative" of a larger set in a metric space. Given a set V (e.g., documents or voters) to represent, and a set C of possible representatives, our criterion requires that for any subset S comprising a theta fraction of V, the average distance of S to their best theta*k points in R should not be more than a factor gamma compared to their average distance to the best theta*k points among all of C. This definition is a strengthening of proportional fairness and core fairness, but - different from those notions - requires that large cohesive clusters be represented proportionally to their size. Since there are instances for which - unless gamma is polynomially large - no solutions exist, we study this notion in a resource augmentation framework, implicitly stating the constraints for a set R of size k as though its size were only k/alpha, for alpha > 1. Furthermore, motivated by the application to elections, we mostly focus on the "ordinal" model, where the algorithm does not learn the actual distances; instead, it learns only for each point v in V and each candidate pairs c, c' which of c, c' is closer to v. Our main result is that the Expanding Approvals Rule (EAR) of Aziz and Lee is (alpha, gamma) representative with gamma <= 1 + 6.71 * (alpha)/(alpha-1). Our results lead to three notable byproducts. First, we show that the EAR achieves constant proportional fairness in the ordinal model, giving the first positive result on metric proportional fairness with ordinal information. Second, we show that for the core fairness objective, the EAR achieves the same asymptotic tradeoff between resource augmentation and approximation as the recent results of Li et al., which used full knowledge of the metric. Finally, our results imply a very simple single-winner voting rule with metric distortion at most 44.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
416,133
2003.03293
Probability Weighted Compact Feature for Domain Adaptive Retrieval
Domain adaptive image retrieval includes single-domain retrieval and cross-domain retrieval. Most of the existing image retrieval methods only focus on single-domain retrieval, which assumes that the distributions of retrieval databases and queries are similar. However, in practical application, the discrepancies between retrieval databases often taken in ideal illumination/pose/background/camera conditions and queries usually obtained in uncontrolled conditions are very large. In this paper, considering the practical application, we focus on challenging cross-domain retrieval. To address the problem, we propose an effective method named Probability Weighted Compact Feature Learning (PWCF), which provides inter-domain correlation guidance to promote cross-domain retrieval accuracy and learns a series of compact binary codes to improve the retrieval speed. First, we derive our loss function through the Maximum A Posteriori Estimation (MAP): Bayesian Perspective (BP) induced focal-triplet loss, BP induced quantization loss and BP induced classification loss. Second, we propose a common manifold structure between domains to explore the potential correlation across domains. Considering the original feature representation is biased due to the inter-domain discrepancy, the manifold structure is difficult to be constructed. Therefore, we propose a new feature named Histogram Feature of Neighbors (HFON) from the sample statistics perspective. Extensive experiments on various benchmark databases validate that our method outperforms many state-of-the-art image retrieval methods for domain adaptive image retrieval. The source code is available at https://github.com/fuxianghuang1/PWCF
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
167,184
2212.08302
Safe Evaluation For Offline Learning: Are We Ready To Deploy?
The world currently offers an abundance of data in multiple domains, from which we can learn reinforcement learning (RL) policies without further interaction with the environment. RL agents can learn offline from such data, but deploying them while learning might be dangerous in domains where safety is critical. Therefore, it is essential to find a way to estimate how a newly-learned agent will perform if deployed in the target environment before actually deploying it and without the risk of overestimating its true performance. To achieve this, we introduce a framework for safe evaluation of offline learning using approximate high-confidence off-policy evaluation (HCOPE) to estimate the performance of offline policies during learning. In our setting, we assume a source of data, which we split into a train-set, to learn an offline policy, and a test-set, to estimate a lower bound on the offline policy using off-policy evaluation with bootstrapping. A lower-bound estimate tells us how well a newly-learned target policy would perform before it is deployed in the real environment, and therefore allows us to decide when to deploy our learned policy.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
336,697
2003.06693
Certified Defenses for Adversarial Patches
Adversarial patch attacks are among the most practical threat models against real-world computer vision systems. This paper studies certified and empirical defenses against patch attacks. We begin with a set of experiments showing that most existing defenses, which work by pre-processing input images to mitigate adversarial patches, are easily broken by simple white-box adversaries. Motivated by this finding, we propose the first certified defense against patch attacks, and propose faster methods for its training. Furthermore, we experiment with different patch shapes for testing, obtaining surprisingly good robustness transfer across shapes, and present preliminary results on certified defense against sparse attacks. Our complete implementation can be found on: https://github.com/Ping-C/certifiedpatchdefense.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
168,194
1912.02545
Fine-Grained Emotion Classification of Chinese Microblogs Based on Graph Convolution Networks
Microblogs are widely used to express people's opinions and feelings in daily life. Sentiment analysis (SA) can timely detect personal sentiment polarities through analyzing text. Deep learning approaches have been broadly used in SA but still have not fully exploited syntax information. In this paper, we propose a syntax-based graph convolution network (GCN) model to enhance the understanding of diverse grammatical structures of Chinese microblogs. In addition, a pooling method based on percentile is proposed to improve the accuracy of the model. In experiments, for Chinese microblogs emotion classification categories including happiness, sadness, like, anger, disgust, fear, and surprise, the F-measure of our model reaches 82.32% and exceeds the state-of-the-art algorithm by 5.90%. The experimental results show that our model can effectively utilize the information of dependency parsing to improve the performance of emotion detection. What is more, we annotate a new dataset for Chinese emotion classification, which is open to other researchers.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
156,378
2405.02850
Halfway Escape Optimization: A Quantum-Inspired Solution for General Optimization Problems
This paper first proposes the Halfway Escape Optimization (HEO) algorithm, a quantum-inspired metaheuristic designed to address general optimization problems. The HEO mimics quantum effects such as tunneling and entanglement. After the introduction to the HEO mechanisms, the study presents a comprehensive evaluation of HEO's performance against extensively-used optimization algorithms, including Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Artificial Fish Swarm Algorithm (AFSA), Grey Wolf Optimizer (GWO), and Quantum-behaved Particle Swarm Optimization (QPSO). The primary analysis encompasses 14 benchmark functions with dimension 30, demonstrating HEO's effectiveness and adaptability in navigating general optimization problems. The test of HEO in Pressure Vessel Design and Tubular Column Design also indicates its feasibility and potential in real-time applications. Further validation of HEO on Osmancik-97 and Cammeo Rice Classification achieves a higher accuracy record.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
451,954
2403.17704
Prioritize Team Actions: Multi-Agent Temporal Logic Task Planning with Ordering Constraints
In this paper, we investigate the problem of linear temporal logic (LTL) path planning for multi-agent systems, introducing the new concept of \emph{ordering constraints}. Specifically, we consider a generic objective function that is defined for the path of each individual agent. The primary objective is to find a global plan for the team of agents, ensuring they collectively meet the specified LTL requirements. Simultaneously, we aim to maintain a pre-determined order in the values of the objective function for each agent, which we refer to as the ordering constraints. This new requirement stems from scenarios like security-aware planning, where relative orders outweigh absolute values in importance. We present an efficient algorithm to solve this problem, supported by proofs of correctness that demonstrate the optimality of our solution. Additionally, we provide a case study in security-aware path planning to illustrate the practicality and effectiveness of our proposed approach.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
441,583
2410.08361
Upper Bounds for Learning in Reproducing Kernel Hilbert Spaces for Non IID Samples
In this paper, we study a Markov chain-based stochastic gradient algorithm in general Hilbert spaces, aiming to approximate the optimal solution of a quadratic loss function. We establish probabilistic upper bounds on its convergence. We further extend these results to an online regularized learning algorithm in reproducing kernel Hilbert spaces, where the samples are drawn along a Markov chain trajectory and hence are non-i.i.d.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
497,076
2002.09577
Emulating duration and curvature of coral snake anti-predator thrashing behaviors using a soft-robotic platform
This paper presents a soft-robotic platform for exploring the ecological relevance of non-locomotory movements via animal-robot interactions. Coral snakes (genus Micrurus) and their mimics use vigorous, non-locomotory, and arrhythmic thrashing to deter predation. There is variation across snake species in the duration and curvature of anti-predator thrashes, and it is unclear how these aspects of motion interact to contribute to snake survival. In this work, soft robots composed of fiber-reinforced elastomeric enclosures (FREEs) are developed to emulate the anti-predator behaviors of three genera of snake. Curvature and duration of motion are estimated for both live snakes and robots, providing a quantitative assessment of the robots' ability to emulate snake poses. The curvature values of the fabricated soft-robotic head, midsection, and tail segments are found to overlap with those exhibited by live snakes. Soft robot motion durations were less than or equal to those of snakes for all three genera. Additionally, combinations of segments were selected to emulate three specific snake genera with distinct anti-predatory behavior, producing curvature values that aligned well with live snake observations.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
165,104
1710.06564
Replacement AutoEncoder: A Privacy-Preserving Algorithm for Sensory Data Analysis
An increasing number of sensors on mobile, Internet of things (IoT), and wearable devices generate time-series measurements of physical activities. Though access to the sensory data is critical to the success of many beneficial applications such as health monitoring or activity recognition, a wide range of potentially sensitive information about the individuals can also be discovered through access to sensory data and this cannot easily be protected using traditional privacy approaches. In this paper, we propose a privacy-preserving sensing framework for managing access to time-series data in order to provide utility while protecting individuals' privacy. We introduce Replacement AutoEncoder, a novel algorithm which learns how to transform discriminative features of data that correspond to sensitive inferences into features that are more commonly observed in non-sensitive inferences, to protect users' privacy. This is achieved by defining a user-customized objective function for deep autoencoders. Our replacement method not only eliminates the possibility of recognizing sensitive inferences, it also eliminates the possibility of detecting their occurrence. That is the main weakness of other approaches such as filtering or randomization. We evaluate the efficacy of the algorithm with an activity recognition task in a multi-sensing environment using extensive experiments on three benchmark datasets. We show that it can retain the recognition accuracy of state-of-the-art techniques while simultaneously preserving the privacy of sensitive information. Finally, we utilize GANs for detecting the occurrence of replacement, after releasing data, and show that this can be done only if the adversarial network is trained on the users' original data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
82,797
1902.03427
Low-pass filtering as Bayesian inference
We propose a Bayesian nonparametric method for low-pass filtering that can naturally handle unevenly-sampled and noise-corrupted observations. The proposed model is constructed as a latent-factor model for time series, where the latent factors are Gaussian processes with non-overlapping spectra. With this construction, the low-pass version of the time series can be identified as the low-frequency latent component, and therefore it can be found by means of Bayesian inference. We show that the model admits exact training and can be implemented with minimal numerical approximations. Finally, the proposed model is validated against standard linear filters on synthetic and real-world time series.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
121,102
2311.18695
Seg2Reg: Differentiable 2D Segmentation to 1D Regression Rendering for 360 Room Layout Reconstruction
State-of-the-art single-view 360-degree room layout reconstruction methods formulate the problem as a high-level 1D (per-column) regression task. On the other hand, traditional low-level 2D layout segmentation is simpler to learn and can represent occluded regions, but it requires complex post-processing for the targeting layout polygon and sacrifices accuracy. We present Seg2Reg to render 1D layout depth regression from the 2D segmentation map in a differentiable and occlusion-aware way, marrying the merits of both sides. Specifically, our model predicts floor-plan density for the input equirectangular 360-degree image. Formulating the 2D layout representation as a density field enables us to employ `flattened' volume rendering to form 1D layout depth regression. In addition, we propose a novel 3D warping augmentation on layout to improve generalization. Finally, we re-implement recent room layout reconstruction methods into our codebase for benchmarking and explore modern backbones and training techniques to serve as the strong baseline. Our model significantly outperforms previous arts. The code will be made available upon publication.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
411,778
cs/0007003
Using compression to identify acronyms in text
Text mining is about looking for patterns in natural language text, and may be defined as the process of analyzing text to extract information from it for particular purposes. In previous work, we claimed that compression is a key technology for text mining, and backed this up with a study that showed how particular kinds of lexical tokens---names, dates, locations, etc.---can be identified and located in running text, using compression models to provide the leverage necessary to distinguish different token types (Witten et al., 1999).
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
537,150
2008.08226
Augmenting Neural Differential Equations to Model Unknown Dynamical Systems with Incomplete State Information
Neural Ordinary Differential Equations replace the right-hand side of a conventional ODE with a neural net which, by virtue of the universal approximation theorem, can be trained to represent any function. When we do not know the function itself, but have state trajectories (time evolution) of the ODE system, we can still train the neural net to learn the representation of the underlying but unknown ODE. However, if the state of the system is incompletely known, then the right-hand side of the ODE cannot be calculated and the derivatives needed to propagate the system are unavailable. We show that a specially augmented Neural ODE can learn the system when given incomplete state information. As a worked example, we apply neural ODEs to the Lotka-Volterra problem of 3 species: rabbits, wolves, and bears. We show that even when the data for the bear time series is removed, the remaining time series of the rabbits and wolves is sufficient to learn the dynamical system despite the incomplete state information. This is surprising since a conventional ODE system cannot output the correct derivatives without the full state as input. We implement augmented neural ODEs and differential equation solvers in the Julia programming language.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
192,354
2002.04279
Testing of Support Tools for Plagiarism Detection
There is a general belief that software must be able to easily do things that humans find difficult. Since finding sources for plagiarism in a text is not an easy task, there is a widespread expectation that it must be simple for software to determine if a text is plagiarized or not. Software cannot determine plagiarism, but it can work as a support tool for identifying some text similarity that may constitute plagiarism. But how well do the various systems work? This paper reports on a collaborative test of 15 web-based text-matching systems that can be used when plagiarism is suspected. It was conducted by researchers from seven countries using test material in eight different languages, evaluating the effectiveness of the systems on single-source and multi-source documents. A usability examination was also performed. The sobering results show that although some systems can indeed help identify some plagiarized content, they clearly do not find all plagiarism and at times also identify non-plagiarized material as problematic.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
163,567
2406.06046
MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models
Pretraining data selection has the potential to improve language model pretraining efficiency by utilizing higher-quality data from massive web data corpora. Current data selection methods, which rely on either hand-crafted rules or larger reference models, are conducted statically and do not capture the evolving data preferences during pretraining. In this paper, we introduce model-aware data selection with data influence models (MATES), where a data influence model continuously adapts to the evolving data preferences of the pretraining model and then selects the data most effective for the current pretraining progress. Specifically, we collect oracle data influence by locally probing the pretraining model and fine-tune a small data influence model to approximate it accurately. The data influence model then predicts data influence over the whole pretraining corpus and selects the most influential data for the next pretraining stage. Experiments of pretraining 410M and 1B models on the C4 dataset demonstrate that MATES significantly outperforms random data selection on extensive downstream tasks. It doubles the gains achieved by the state-of-the-art data selection approach that leverages larger reference models and reduces the total FLOPs required to reach certain performances by half. Further analyses validate the effectiveness of the locally probed oracle data influence and the approximation with data influence models. Our code is open-sourced at https://github.com/cxcscmu/MATES.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
462,416
2211.14632
Why Neural Networks Work
We argue that many properties of fully-connected feedforward neural networks (FCNNs), also called multi-layer perceptrons (MLPs), are explainable from the analysis of a single pair of operations, namely a random projection into a higher-dimensional space than the input, followed by a sparsification operation. For convenience, we call this pair of successive operations expand-and-sparsify following the terminology of Dasgupta. We show how expand-and-sparsify can explain the observed phenomena that have been discussed in the literature, such as the so-called Lottery Ticket Hypothesis, the surprisingly good performance of randomly-initialized untrained neural networks, the efficacy of Dropout in training and most importantly, the mysterious generalization ability of overparameterized models, first highlighted by Zhang et al. and subsequently identified even in non-neural network models by Belkin et al.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
true
false
false
332,922
1607.08811
Can a CNN Recognize Catalan Diet?
Nowadays, we can find several diseases related to the unhealthy diet habits of the population, such as diabetes, obesity, anemia, bulimia and anorexia. In many cases, these diseases are related to the food consumption of people. Mediterranean diet is scientifically known as a healthy diet that helps to prevent many metabolic diseases. In particular, our work focuses on the recognition of Mediterranean food and dishes. The development of this methodology would allow analysing the daily habits of users with wearable cameras, within the topic of lifelogging. By using automatic mechanisms we could build an objective tool for the analysis of the patient's behaviour, allowing specialists to discover unhealthy food patterns and understand the user's lifestyle. With the aim to automatically recognize a complete diet, we introduce a challenging multi-labeled dataset related to Mediterranean diet called FoodCAT. The first type of label provided consists of 115 food classes with an average of 400 images per dish, and the second one consists of 12 food categories with an average of 3800 pictures per class. This dataset will serve as a basis for the development of automatic diet recognition. In this context, deep learning and more specifically, Convolutional Neural Networks (CNNs), currently are state-of-the-art methods for automatic food recognition. In our work, we compare several architectures for image classification, with the purpose of diet recognition. Applying the best model for recognising food categories, we achieve a top-1 accuracy of 72.29\%, and top-5 of 97.07\%. In a complete diet recognition of dishes from Mediterranean diet, enlarged with the Food-101 dataset for international dishes recognition, we achieve a top-1 accuracy of 68.07\%, and top-5 of 89.53\%, for a total of 115+101 food classes.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
59,204
2002.04138
Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
Recent work has shown great promise in explaining neural network behavior. In particular, feature attribution methods explain which features were most important to a model's prediction on a given input. However, for many tasks, simply knowing which features were important to a model's prediction may not provide enough insight to understand model behavior. The interactions between features within the model may better help us understand not only the model, but also why certain features are more important than others. In this work, we present Integrated Hessians, an extension of Integrated Gradients that explains pairwise feature interactions in neural networks. Integrated Hessians overcomes several theoretical limitations of previous methods to explain interactions, and unlike such previous methods is not limited to a specific architecture or class of neural network. Additionally, we find that our method is faster than existing methods when the number of features is large, and outperforms previous methods on existing quantitative benchmarks. Code available at https://github.com/suinleelab/path_explain
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
163,519
1407.4451
Reconstructing propagation networks with natural diversity and identifying hidden sources
Our ability to uncover complex network structure and dynamics from data is fundamental to understanding and controlling collective dynamics in complex systems. Despite recent progress in this area, reconstructing networks with stochastic dynamical processes from limited time series remains an outstanding problem. Here we develop a framework based on compressed sensing to reconstruct complex networks on which stochastic spreading dynamics take place. We apply the methodology to a large number of model and real networks, finding that a full reconstruction of inhomogeneous interactions can be achieved from small amounts of polarized (binary) data, a virtue of compressed sensing. Further, we demonstrate that a hidden source that triggers the spreading process but is externally inaccessible can be ascertained and located with high confidence in the absence of direct routes of propagation from it. Our approach thus establishes a paradigm for tracing and controlling epidemic invasion and information diffusion in complex networked systems.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
34,706
2010.10566
Better Highlighting: Creating Sub-Sentence Summary Highlights
Amongst the best means to summarize is highlighting. In this paper, we aim to generate summary highlights to be overlaid on the original documents to make it easier for readers to sift through a large amount of text. The method allows summaries to be understood in context to prevent a summarizer from distorting the original meaning, of which abstractive summarizers usually fall short. In particular, we present a new method to produce self-contained highlights that are understandable on their own to avoid confusion. Our method combines determinantal point processes and deep contextualized representations to identify an optimal set of sub-sentence segments that are both important and non-redundant to form summary highlights. To demonstrate the flexibility and modeling power of our method, we conduct extensive experiments on summarization datasets. Our analysis provides evidence that highlighting is a promising avenue of research towards future summarization.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
201,914
2402.12319
Dynamic Environment Responsive Online Meta-Learning with Fairness Awareness
The fairness-aware online learning framework has emerged as a potent tool within the context of continuous lifelong learning. In this scenario, the learner's objective is to progressively acquire new tasks as they arrive over time, while also guaranteeing statistical parity among various protected sub-populations, such as race and gender, when it comes to the newly introduced tasks. A significant limitation of current approaches lies in their heavy reliance on the i.i.d (independent and identically distributed) assumption concerning data, leading to a static regret analysis of the framework. Nevertheless, it's crucial to note that achieving low static regret does not necessarily translate to strong performance in dynamic environments characterized by tasks sampled from diverse distributions. In this paper, to tackle the fairness-aware online learning challenge in evolving settings, we introduce a unique regret measure, FairSAR, by incorporating long-term fairness constraints into a strongly adapted loss regret framework. Moreover, to determine an optimal model parameter at each time step, we introduce an innovative adaptive fairness-aware online meta-learning algorithm, referred to as FairSAOML. This algorithm possesses the ability to adjust to dynamic environments by effectively managing bias control and model accuracy. The problem is framed as a bi-level convex-concave optimization, considering both the model's primal and dual parameters, which pertain to its accuracy and fairness attributes, respectively. Theoretical analysis yields sub-linear upper bounds for both loss regret and the cumulative violation of fairness constraints. Our experimental evaluation on various real-world datasets in dynamic environments demonstrates that our proposed FairSAOML algorithm consistently outperforms alternative approaches rooted in the most advanced prior online learning methods.
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
false
false
430,800
1705.10814
Character Composition Model with Convolutional Neural Networks for Dependency Parsing on Morphologically Rich Languages
We present a transition-based dependency parser that uses a convolutional neural network to compose word representations from characters. The character composition model shows great improvement over the word-lookup model, especially for parsing agglutinative languages. These improvements are even better than using pre-trained word embeddings from extra data. On the SPMRL data sets, our system outperforms the previous best greedy parser (Ballesteros et al., 2015) by a margin of 3% on average.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
74,473
1910.01837
Enriching Visual with Verbal Explanations for Relational Concepts -- Combining LIME with Aleph
With the increasing number of deep learning applications, there is a growing demand for explanations. Visual explanations provide information about which parts of an image are relevant for a classifier's decision. However, highlighting of image parts (e.g., an eye) cannot capture the relevance of a specific feature value for a class (e.g., that the eye is wide open). Furthermore, highlighting cannot convey whether the classification depends on the mere presence of parts or on a specific spatial relation between them. Consequently, we present an approach that is capable of explaining a classifier's decision in terms of logic rules obtained by the Inductive Logic Programming system Aleph. The examples and the background knowledge needed for Aleph are based on the explanation generation method LIME. We demonstrate our approach with images of a blocksworld domain. First, we show that our approach is capable of identifying a single relation as important explanatory construct. Afterwards, we present the more complex relational concept of towers. Finally, we show how the generated relational rules can be explicitly related with the input image, resulting in richer explanations.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
148,051
2307.03167
Risk-Averse Trajectory Optimization via Sample Average Approximation
Trajectory optimization under uncertainty underpins a wide range of applications in robotics. However, existing methods are limited in terms of reasoning about sources of epistemic and aleatoric uncertainty, space and time correlations, nonlinear dynamics, and non-convex constraints. In this work, we first introduce a continuous-time planning formulation with an average-value-at-risk constraint over the entire planning horizon. Then, we propose a sample-based approximation that unlocks an efficient and general-purpose algorithm for risk-averse trajectory optimization. We prove that the method is asymptotically optimal and derive finite-sample error bounds. Simulations demonstrate the high speed and reliability of the approach on problems with stochasticity in nonlinear dynamics, obstacle fields, interactions, and terrain parameters.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
377,945
2204.03141
Adversarial Machine Learning Attacks Against Video Anomaly Detection Systems
Anomaly detection in videos is an important computer vision problem with various applications including automated video surveillance. Although adversarial attacks on image understanding models have been heavily investigated, there is not much work on adversarial machine learning targeting video understanding models and no previous work which focuses on video anomaly detection. To this end, we investigate an adversarial machine learning attack against video anomaly detection systems, that can be implemented via an easy-to-perform cyber-attack. Since surveillance cameras are usually connected to the server running the anomaly detection model through a wireless network, they are prone to cyber-attacks targeting the wireless connection. We demonstrate how the Wi-Fi deauthentication attack, a notoriously easy-to-perform and effective denial-of-service (DoS) attack, can be utilized to generate adversarial data for video anomaly detection systems. Specifically, we apply several effects caused by the Wi-Fi deauthentication attack on video quality (e.g., slow down, freeze, fast forward, low resolution) to the popular benchmark datasets for video anomaly detection. Our experiments with several state-of-the-art anomaly detection models show that the attackers can significantly undermine the reliability of video anomaly detection systems by causing frequent false alarms and hiding physical anomalies from the surveillance system.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
290,203
2103.16349
Historical Inertia: A Neglected but Powerful Baseline for Long Sequence Time-series Forecasting
Long sequence time-series forecasting (LSTF) has become increasingly popular for its wide range of applications. Though superior models have been proposed to enhance the prediction effectiveness and efficiency, it is reckless to neglect or underestimate one of the most natural and basic temporal properties of time-series. In this paper, we introduce a new baseline for LSTF, the historical inertia (HI), which refers to the most recent historical data-points in the input time series. We experimentally evaluate the power of historical inertia on four public real-world datasets. The results demonstrate that up to 82\% relative improvement over state-of-the-art works can be achieved even by adopting HI directly as output.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
227,552
1909.06174
Petri Net Machines for Human-Agent Interaction
Smart speakers and robots become ever more prevalent in our daily lives. These agents are able to execute a wide range of tasks and actions and, therefore, need systems to control their execution. Current state-of-the-art approaches such as (deep) reinforcement learning, however, require vast amounts of data for training which is often hard to come by when interacting with humans. To overcome this issue, most systems still rely on Finite State Machines. We introduce Petri Net Machines which present a formal definition for state machines based on Petri Nets that are able to execute concurrent actions reliably, execute and interleave several plans at the same time, and provide an easy-to-use modelling language. We show their workings based on the example of Human-Robot Interaction in a shopping mall.
true
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
145,311
2403.03188
Towards Democratized Flood Risk Management: An Advanced AI Assistant Enabled by GPT-4 for Enhanced Interpretability and Public Engagement
Real-time flood forecasting is vital for effective emergency responses, but bridging the gap between complex numerical models and practical decision-making remains challenging. Decision-makers often rely on experts, while the public struggles to interpret flood risk information. To address this, we developed a customized AI Assistant powered by GPT-4. This tool enhances communication between decision-makers, the public, and forecasters, requiring no specialized knowledge. The framework leverages GPT-4's advanced natural language capabilities to search flood alerts, answer inquiries, and integrate real-time warnings with flood maps and social vulnerability data. It simplifies complex flood zone information into actionable advice. The prototype was evaluated on relevance, error resilience, and contextual understanding, with performance compared across different GPT models. This research advances flood risk management by making critical information more accessible and engaging, demonstrating the potential of AI tools like GPT-4 in addressing social and environmental challenges.
true
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
435,099
2409.08577
Exploring Remote Collaboration: The Impact of Avatar Representation on Dyadic Haptic Interactions in Shared Virtual Environments
This study is the first to explore the interplay between haptic interaction and avatar representation in Shared Virtual Environments (SVEs). We focus on their combined effect on social presence and task-related scores in dyadic collaborations. In a series of experiments, participants performed the plate control task with haptic interaction under four avatar representation conditions: avatars of both participant and partner were displayed, only the participant's avatar was displayed, only the partner's avatar was displayed, and no avatars were displayed. The study finds that avatar representation, especially of the partner, significantly enhances the perception of social presence, which haptic interaction alone does not fully achieve. In contrast, neither the presence nor the type of avatar representation impacts the task performance or participants' force effort of the task, suggesting that haptic interaction provides sufficient interaction cues for the execution of the task. These results underscore the significance of integrating both visual and haptic modalities to optimize remote collaboration experiences in virtual environments, ensuring effective communication and a strong sense of social presence.
true
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
487,967
2405.07773
Human-Modeling in Sequential Decision-Making: An Analysis through the Lens of Human-Aware AI
"Human-aware" has become a popular keyword used to describe a particular class of AI systems that are designed to work and interact with humans. While there exists a surprising level of consistency among the works that use the label human-aware, the term itself mostly remains poorly understood. In this work, we retroactively try to provide an account of what constitutes a human-aware AI system. We see that human-aware AI is a design oriented paradigm, one that focuses on the need for modeling the humans it may interact with. Additionally, we see that this paradigm offers us intuitive dimensions to understand and categorize the kinds of interactions these systems might have with humans. We show the pedagogical value of these dimensions by using them as a tool to understand and review the current landscape of work related to human-AI systems that purport some form of human modeling. To fit the scope of a workshop paper, we specifically narrowed our review to papers that deal with sequential decision-making and were published in a major AI conference in the last three years. Our analysis helps identify the space of potential research problems that are currently being overlooked. We perform additional analysis on the degree to which these works make explicit reference to results from social science and whether they actually perform user-studies to validate their systems. We also provide an accounting of the various AI methods used by these works.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
453,852
1901.04056
The Liver Tumor Segmentation Benchmark (LiTS)
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances with various lesion-to-background levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that not a single algorithm performed best for both liver and liver tumors in the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing to the liver-related segmentation tasks in \url{http://medicaldecathlon.com/}. In addition, both data and online evaluation are accessible via \url{www.lits-challenge.com}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
118,544
2206.00229
Multi-Object Grasping in the Plane
We consider a novel problem where multiple rigid convex polygonal objects rest in randomly placed positions and orientations on a planar surface visible from an overhead camera. The objective is to efficiently grasp and transport all objects into a bin using multi-object push-grasps, where multiple objects are pushed together to facilitate multi-object grasping. We provide necessary conditions for frictionless multi-object push-grasps and apply these to filter inadmissible grasps in a novel multi-object grasp planner. We find that our planner is 19 times faster than a Mujoco simulator baseline. We also propose a picking algorithm that uses both single- and multi-object grasps to pick objects. In physical grasping experiments comparing performance with a single-object picking baseline, we find that the frictionless multi-object grasping system achieves 13.6\% higher grasp success and is 59.9\% faster, from 212 PPH to 340 PPH. See \url{https://sites.google.com/view/multi-object-grasping} for videos and code.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
300,038
2307.05337
Explaining Competitive-Level Programming Solutions using LLMs
In this paper, we approach competitive-level programming problem-solving as a composite task of reasoning and code generation. We propose a novel method to automatically annotate natural language explanations to \textit{<problem, solution>} pairs. We show that despite poor performance in solving competitive-level programming problems, state-of-the-art LLMs exhibit a strong capacity in describing and explaining solutions. Our explanation generation methodology can generate a structured solution explanation for the problem containing descriptions and analysis. To evaluate the quality of the annotated explanations, we examine their effectiveness in two aspects: 1) satisfying the human programming expert who authored the oracle solution, and 2) aiding LLMs in solving problems more effectively. The experimental results on the CodeContests dataset demonstrate that while GPT-3.5's and GPT-4's abilities in describing the solution are comparable, GPT-4 shows a better understanding of the key idea behind the solution.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
378,697
2001.11612
Search for Better Students to Learn Distilled Knowledge
Knowledge Distillation, as a model compression technique, has received great attention. The knowledge of a well-performing teacher is distilled to a student with a small architecture. The architecture of the small student is often chosen to be similar to their teacher's, with fewer layers or fewer channels, or both. However, even with the same number of FLOPs or parameters, students with different architectures can achieve different generalization abilities. The configuration of a student architecture requires intensive network architecture engineering. In this work, instead of designing a good student architecture manually, we propose to search for the optimal student automatically. Based on L1-norm optimization, a subgraph from the teacher network topology graph is selected as a student, the goal of which is to minimize the KL-divergence between the student's and teacher's outputs. We verify the proposal on the CIFAR10 and CIFAR100 datasets. The empirical experiments show that the learned student architecture achieves better performance than those specified manually. We also visualize and analyze the architecture of the found student.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
162,117