Dataset schema (per record: id, title, abstract, 18 boolean category labels, row index):
- id: string, length 9–16
- title: string, length 4–278
- abstract: string, length 3–4.08k
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each
- __index_level_0__: int64, range 0–541k
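The 18 boolean columns form a one-vs-rest multi-label encoding: a record may carry several arXiv categories at once. As a minimal sketch in plain Python (no dataset-loading library assumed; `active_labels` is a hypothetical helper, and the column order is taken from the schema above), decoding a record into its active labels could look like:

```python
# One-vs-rest boolean label columns, in the order given by the schema.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def active_labels(record):
    """Return the names of the label columns that are True for a record."""
    return [name for name in LABEL_COLUMNS if record.get(name)]

# First record of this preview (id 1409.3813): only the cs.CL flag is set.
row = {name: False for name in LABEL_COLUMNS}
row["cs.CL"] = True
print(active_labels(row))  # ['cs.CL']
```

Multi-label records (e.g. 2310.14596, which is both cs.AI and cs.CL) simply have more than one flag set and decode to a longer list.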
1409.3813
Incorporating Semi-supervised Features into Discontinuous Easy-First Constituent Parsing
This paper describes adaptations of EaFi, a parser for easy-first parsing of discontinuous constituents, to multiple languages, as well as extensions to make use of the unlabeled data that was provided as part of the SPMRL shared task 2014.
labels: cs.CL | __index_level_0__: 36,007
2310.14596
Learning to Correct Noisy Labels for Fine-Grained Entity Typing via Co-Prediction Prompt Tuning
Fine-grained entity typing (FET) is an essential task in natural language processing that aims to assign semantic types to entities in text. However, FET poses a major challenge known as the noise labeling problem, whereby current methods rely on estimating noise distribution to identify noisy labels but are confused by diverse noise distribution deviation. To address this limitation, we introduce Co-Prediction Prompt Tuning for noise correction in FET, which leverages multiple prediction results to identify and correct noisy labels. Specifically, we integrate prediction results to recall labeled labels and utilize a differentiated margin to identify inaccurate labels. Moreover, we design an optimization objective concerning divergent co-predictions during fine-tuning, ensuring that the model captures sufficient information and maintains robustness in noise identification. Experimental results on three widely-used FET datasets demonstrate that our noise correction approach significantly enhances the quality of various types of training samples, including those annotated using distant supervision, ChatGPT, and crowdsourcing.
labels: cs.AI, cs.CL | __index_level_0__: 401,934
2407.07159
Finding Fake News Websites in the Wild
The battle against the spread of misinformation on the Internet is a daunting task faced by modern society. Fake news content is primarily distributed through digital platforms, with websites dedicated to producing and disseminating such content playing a pivotal role in this complex ecosystem. Therefore, these websites are of great interest to misinformation researchers. However, obtaining a comprehensive list of websites labeled as producers and/or spreaders of misinformation can be challenging, particularly in developing countries. In this study, we propose a novel methodology for identifying websites responsible for creating and disseminating misinformation content, which are closely linked to users who share confirmed instances of fake news on social media. We validate our approach on Twitter by examining various execution modes and contexts. Our findings demonstrate the effectiveness of the proposed methodology in identifying misinformation websites, which can aid in gaining a better understanding of this phenomenon and enabling competent entities to tackle the problem in various areas of society.
labels: cs.SI, cs.CY | __index_level_0__: 471,662
2308.06451
Semantic Equivariant Mixup
Mixup is a well-established data augmentation technique, which can extend the training distribution and regularize the neural networks by creating "mixed" samples based on the label-equivariance assumption, i.e., a proportional mixup of the input data results in the corresponding labels being mixed in the same proportion. However, previous mixup variants may fail to exploit the label-independent information in mixed samples during training, which usually contains richer semantic information. To further release the power of mixup, we first extend the previous label-equivariance assumption to the semantic-equivariance assumption, which states that the proportional mixup of the input data should lead to the corresponding representation being mixed in the same proportion. Then a generic mixup regularization at the representation level is proposed, which can further regularize the model with the semantic information in mixed samples. At a high level, the proposed semantic equivariant mixup (sem) encourages the structure of the input data to be preserved in the representation space, i.e., a change of input will result in the obtained representation information changing in the same way. Different from previous mixup variants, which tend to over-focus on the label-related information, the proposed method aims to preserve richer semantic information in the input with the semantic-equivariance assumption, thereby improving the robustness of the model against distribution shifts. We conduct extensive empirical studies and qualitative analyses to demonstrate the effectiveness of our proposed method. The code of the manuscript is in the supplement.
labels: cs.AI, cs.CV | __index_level_0__: 385,143
2009.14233
Lip-reading with Densely Connected Temporal Convolutional Networks
In this work, we present the Densely Connected Temporal Convolutional Network (DC-TCN) for lip-reading of isolated words. Although Temporal Convolutional Networks (TCN) have recently demonstrated great potential in many vision tasks, their receptive fields are not dense enough to model the complex temporal dynamics in lip-reading scenarios. To address this problem, we introduce dense connections into the network to capture more robust temporal features. Moreover, our approach utilises the Squeeze-and-Excitation block, a light-weight attention mechanism, to further enhance the model's classification power. Without bells and whistles, our DC-TCN method achieves 88.36% accuracy on the Lip Reading in the Wild (LRW) dataset and 43.65% on the LRW-1000 dataset, surpassing all baseline methods and setting a new state of the art on both datasets.
labels: cs.CV | __index_level_0__: 197,967
1709.06076
Modelling Energy Consumption based on Resource Utilization
Power management is an expensive and important issue for large computational infrastructures such as datacenters, large clusters, and computational grids. However, measuring energy consumption of scalable systems may be impractical due to both the cost and the complexity of deploying power metering devices on a large number of machines. In this paper, we propose the use of information about resource utilization (e.g. processor, memory, disk operations, and network traffic) as proxies for estimating power consumption. We employ machine learning techniques to estimate power consumption using such information, which is provided by common operating systems. Experiments with linear regression, regression trees, and multilayer perceptrons on data from different hardware resulted in a model with 99.94% accuracy and a 6.32-watt error in the best case.
labels: cs.LG, Other | __index_level_0__: 81,024
1111.6682
A General Robust Linear Transceiver Design for Multi-Hop Amplify-and-Forward MIMO Relaying Systems
In this paper, linear transceiver design for multi-hop amplify-and-forward (AF) multiple-input multiple-output (MIMO) relaying systems with Gaussian distributed channel estimation errors is investigated. Commonly used transceiver design criteria including weighted mean-square-error (MSE) minimization, capacity maximization, worst-MSE/MAX-MSE minimization and weighted sum-rate maximization, are considered and unified into a single matrix-variate optimization problem. A general robust design algorithm is proposed to solve the unified problem. Specifically, by exploiting majorization theory and properties of matrix-variate functions, the optimal structure of the robust transceiver is derived when either the covariance matrix of channel estimation errors seen from the transmitter side or the corresponding covariance matrix seen from the receiver side is proportional to an identity matrix. Based on the optimal structure, the original transceiver design problems are reduced to much simpler problems with only scalar variables whose solutions are readily obtained by an iterative water-filling algorithm. A number of existing transceiver design algorithms are found to be special cases of the proposed solution. The differences between our work and the existing related work are also discussed in detail. The performance advantages of the proposed robust designs are demonstrated by simulation results.
labels: cs.IT | __index_level_0__: 13,214
2306.03504
Ada-TTA: Towards Adaptive High-Quality Text-to-Talking Avatar Synthesis
We are interested in a novel task, namely low-resource text-to-talking avatar. Given only a few-minute-long talking person video with the audio track as the training data and arbitrary texts as the driving input, we aim to synthesize high-quality talking portrait videos corresponding to the input text. This task has broad application prospects in the digital human industry but has not been technically achieved yet due to two challenges: (1) It is challenging to mimic the timbre from out-of-domain audio for a traditional multi-speaker Text-to-Speech system. (2) It is hard to render high-fidelity and lip-synchronized talking avatars with limited training data. In this paper, we introduce Adaptive Text-to-Talking Avatar (Ada-TTA), which (1) designs a generic zero-shot multi-speaker TTS model that well disentangles the text content, timbre, and prosody; and (2) embraces recent advances in neural rendering to achieve realistic audio-driven talking face video generation. With these designs, our method overcomes the aforementioned two challenges and is able to generate identity-preserving speech and realistic talking-person video. Experiments demonstrate that our method can synthesize realistic, identity-preserving, and audio-visually synchronized talking avatar videos.
labels: cs.SD, cs.CV | __index_level_0__: 371,359
2210.13752
Aboveground carbon biomass estimate with Physics-informed deep network
The global carbon cycle is a key process to understand how our climate is changing. However, monitoring its dynamics is difficult because a high-resolution robust measurement of key state parameters, including the aboveground carbon biomass (AGB), is required. Here, we use a deep neural network to generate a wall-to-wall map of AGB within the Continental USA (CONUS) with 30-meter spatial resolution for the year 2021. We combine radar and optical hyperspectral imagery with a physical climate parameter of SIF-based GPP. Validation results show that a masked variation of UNet has the lowest validation RMSE of 37.93 $\pm$ 1.36 Mg C/ha, as compared to 52.30 $\pm$ 0.03 Mg C/ha for a random forest algorithm. Furthermore, models that learn from SIF-based GPP in addition to radar and optical imagery reduce validation RMSE by almost 10% and the standard deviation by 40%. Finally, we apply our model to measure losses in AGB from the recent 2021 Caldor wildfire in California, and validate our analysis with a Sentinel-based burn index.
labels: cs.LG | __index_level_0__: 326,293
2402.17572
Hyperdimensional computing: a fast, robust and interpretable paradigm for biological data
Advances in bioinformatics are primarily due to new algorithms for processing diverse biological data sources. While sophisticated alignment algorithms have been pivotal in analyzing biological sequences, deep learning has substantially transformed bioinformatics, addressing sequence, structure, and functional analyses. However, these methods are incredibly data-hungry, compute-intensive and hard to interpret. Hyperdimensional computing (HDC) has recently emerged as an intriguing alternative. The key idea is that random vectors of high dimensionality can represent concepts such as sequence identity or phylogeny. These vectors can then be combined using simple operators for learning, reasoning or querying by exploiting the peculiar properties of high-dimensional spaces. Our work reviews and explores the potential of HDC for bioinformatics, emphasizing its efficiency, interpretability, and adeptness in handling multimodal and structured data. HDC holds a lot of potential for various omics data searching, biosignal analysis and health applications.
labels: cs.LG | __index_level_0__: 433,055
2009.08248
A Two-stage Stochastic Programming DSO Framework for Comprehensive Market Participation of DER Aggregators under Uncertainty
In this paper, a distribution system operator (DSO) framework is proposed for comprehensive retail and wholesale markets participation of distributed energy resource (DER) aggregators under uncertainty based on two-stage stochastic programming. Different kinds of DER aggregators including energy storage aggregators (ESAGs), demand response aggregators (DRAGs), electric vehicle (EV) aggregating charging stations (EVCSs), dispatchable distributed generation (DDG) aggregators (DDGAGs), and renewable energy aggregators (REAGs) are modeled. Distribution network operation constraints are considered using a linearized power flow. The problem is modeled using mixed-integer linear programming (MILP) which can be solved by using commercial solvers. Case studies are conducted to investigate the performance of the proposed DSO framework.
labels: cs.SY | __index_level_0__: 196,177
2403.03761
Parameterized quantum comb and simpler circuits for reversing unknown qubit-unitary operations
Quantum comb is an essential tool for characterizing complex quantum protocols in quantum information processing. In this work, we introduce PQComb, a framework leveraging parameterized quantum circuits to explore the capabilities of quantum combs for general quantum process transformation tasks and beyond. By optimizing PQComb for time-reversal simulations of unknown unitary evolutions, we develop a simpler protocol for unknown qubit unitary inversion that reduces the ancilla qubit overhead from 6 to 3 compared to the existing method in [Yoshida, Soeda, Murao, PRL 131, 120602, 2023]. This demonstrates the utility of quantum comb structures and showcases PQComb's potential for solving complex quantum tasks. Our results pave the way for broader PQComb applications in quantum computing and quantum information, emphasizing its versatility for tackling diverse problems in quantum machine learning.
labels: cs.LG, cs.IT | __index_level_0__: 435,329
2106.13711
Multimodal Emergent Fake News Detection via Meta Neural Process Networks
Fake news travels at unprecedented speeds, reaches global audiences and puts users and communities at great risk via social media platforms. Deep learning based models show good performance when trained on large amounts of labeled data on events of interest, whereas the performance of models tends to degrade on other events due to domain shift. Therefore, significant challenges are posed for existing detection approaches to detect fake news on emergent events, where large-scale labeled datasets are difficult to obtain. Moreover, adding the knowledge from newly emergent events requires building a new model from scratch or continuing to fine-tune the model, which can be challenging, expensive, and unrealistic for real-world settings. In order to address those challenges, we propose an end-to-end fake news detection framework named MetaFEND, which is able to learn quickly to detect fake news on emergent events with a few verified posts. Specifically, the proposed model integrates meta-learning and neural process methods together to enjoy the benefits of these approaches. In particular, a label embedding module and a hard attention mechanism are proposed to enhance the effectiveness by handling categorical information and trimming irrelevant posts. Extensive experiments are conducted on multimedia datasets collected from Twitter and Weibo. The experimental results show our proposed MetaFEND model can detect fake news on never-seen events effectively and outperform the state-of-the-art methods.
labels: cs.IR, cs.CL | __index_level_0__: 243,156
1712.08107
A Deep Learning Interpretable Classifier for Diabetic Retinopathy Disease Grading
Deep neural network models have proven very successful in image classification tasks, including medical diagnosis, but their main drawback is their lack of interpretability. They tend to work as intuition machines, reporting results with high statistical confidence but without interpretable explanations. The vast number of parameters in these models makes it difficult to infer a rational interpretation from them. In this paper we present an interpretable diabetic retinopathy classifier able to classify retina images into the different levels of disease severity and to explain its results by assigning a score to every point in the hidden and input space, evaluating its contribution to the final classification in a linear way. The generated visual maps can be interpreted by an expert in order to compare their own knowledge with the interpretation given by the model.
labels: cs.LG, cs.CV | __index_level_0__: 87,136
1912.04050
PhoneBit: Efficient GPU-Accelerated Binary Neural Network Inference Engine for Mobile Phones
Over the last years, a great success of deep neural networks (DNNs) has been witnessed in computer vision and other fields. However, performance and power constraints make it still challenging to deploy DNNs on mobile devices due to their high computational complexity. Binary neural networks (BNNs) have been demonstrated as a promising solution to achieve this goal by using bit-wise operations to replace most arithmetic operations. Currently, existing GPU-accelerated implementations of BNNs are only tailored for desktop platforms. Due to architecture differences, mere porting of such implementations to mobile devices yields suboptimal performance or is impossible in some cases. In this paper, we propose PhoneBit, a GPU-accelerated BNN inference engine for Android-based mobile devices that fully exploits the computing power of BNNs on mobile GPUs. PhoneBit provides a set of operator-level optimizations including locality-friendly data layout, bit packing with vectorization and layers integration for efficient binary convolution. We also provide a detailed implementation and parallelization optimization for PhoneBit to optimally utilize the memory bandwidth and computing power of mobile GPUs. We evaluate PhoneBit with AlexNet, YOLOv2 Tiny and VGG16 with their binary version. Our experiment results show that PhoneBit can achieve significant speedup and energy efficiency compared with state-of-the-art frameworks for mobile devices.
labels: cs.CL, Other | __index_level_0__: 156,749
1705.08508
Vehicle Traffic Driven Camera Placement for Better Metropolis Security Surveillance
Security surveillance is one of the most important issues in smart cities, especially in an era of terrorism. Deploying a number of (video) cameras is a common surveillance approach. Given the never-ending power offered by vehicles to metropolises, exploiting vehicle traffic to design camera placement strategies could potentially facilitate security surveillance. This article constitutes the first effort toward building the linkage between vehicle traffic and security surveillance, which is a critical problem for smart cities. We expect our study could influence the decision making of surveillance camera placement, and foster more research of principled ways of security surveillance beneficial to our physical-world life. Code has been made publicly available.
labels: cs.AI, cs.CY, Other | __index_level_0__: 74,034
1110.1708
Advancing Nuclear Physics Through TOPS Solvers and Tools
At the heart of many scientific applications is the solution of algebraic systems, such as linear systems of equations, eigenvalue problems, and optimization problems, to name a few. TOPS, which stands for Towards Optimal Petascale Simulations, is a SciDAC applied math center focused on the development of solvers for tackling these algebraic systems, as well as the deployment of such technologies in large-scale scientific applications of interest to the U.S. Department of Energy. In this paper, we highlight some of the solver technologies we have developed in optimization and matrix computations. We also describe some accomplishments achieved using these technologies in UNEDF, a SciDAC application project on nuclear physics.
labels: cs.CE | __index_level_0__: 12,541
2112.13047
Channel-Wise Attention-Based Network for Self-Supervised Monocular Depth Estimation
Self-supervised learning has shown very promising results for monocular depth estimation. Both scene structure and local details are significant clues for high-quality depth estimation. Recent works suffer from a lack of explicit modeling of scene structure and proper handling of detail information, which leads to a performance bottleneck and blurry artefacts in predicted results. In this paper, we propose the Channel-wise Attention-based Depth Estimation Network (CADepth-Net) with two effective contributions: 1) The structure perception module employs the self-attention mechanism to capture long-range dependencies and aggregate discriminative features along channel dimensions, explicitly enhancing the perception of scene structure and yielding better scene understanding and richer feature representations. 2) The detail emphasis module re-calibrates channel-wise feature maps and selectively emphasizes informative features, aiming to highlight crucial local detail information and fuse features from different levels more efficiently, resulting in more precise and sharper depth predictions. Furthermore, extensive experiments validate the effectiveness of our method and show that our model achieves state-of-the-art results on the KITTI benchmark and Make3D datasets.
labels: cs.AI, cs.LG, cs.CV | __index_level_0__: 273,126
1302.2712
Bayesian Nonparametric Dictionary Learning for Compressed Sensing MRI
We develop a Bayesian nonparametric model for reconstructing magnetic resonance images (MRI) from highly undersampled k-space data. We perform dictionary learning as part of the image reconstruction process. To this end, we use the beta process as a nonparametric dictionary learning prior for representing an image patch as a sparse combination of dictionary elements. The size of the dictionary and the patch-specific sparsity pattern are inferred from the data, in addition to other dictionary learning variables. Dictionary learning is performed directly on the compressed image, and so is tailored to the MRI being considered. In addition, we investigate a total variation penalty term in combination with the dictionary learning model, and show how the denoising property of dictionary learning removes dependence on regularization parameters in the noisy setting. We derive a stochastic optimization algorithm based on Markov Chain Monte Carlo (MCMC) for the Bayesian model, and use the alternating direction method of multipliers (ADMM) for efficiently performing total variation minimization. We present empirical results on several MRI, which show that the proposed regularization framework can improve reconstruction accuracy over other methods.
labels: cs.CV | __index_level_0__: 21,965
1504.05289
Distance-based species tree estimation: information-theoretic trade-off between number of loci and sequence length under the coalescent
We consider the reconstruction of a phylogeny from multiple genes under the multispecies coalescent. We establish a connection with the sparse signal detection problem, where one seeks to distinguish between a distribution and a mixture of the distribution and a sparse signal. Using this connection, we derive an information-theoretic trade-off between the number of genes, $m$, needed for an accurate reconstruction and the sequence length, $k$, of the genes. Specifically, we show that to detect a branch of length $f$, one needs $m = \Theta(1/[f^{2} \sqrt{k}])$.
labels: cs.LG | __index_level_0__: 42,247
2312.11703
Shaping Political Discourse using multi-source News Summarization
Multi-document summarization is the process of automatically generating a concise summary of multiple documents related to the same topic. This summary can help users quickly understand the key information from a large collection of documents. Multi-document summarization systems are more complex than single-document summarization systems due to the need to identify and combine information from multiple sources. In this paper, we have developed a machine learning model that generates a concise summary of a topic from multiple news documents. The model is designed to be unbiased by sampling its input equally from all the different aspects of the topic, even if the majority of the news sources lean one way.
labels: cs.IR, cs.LG, cs.CL, cs.CY | __index_level_0__: 416,681
2409.02738
SOAR: Simultaneous Exploration and Photographing with Heterogeneous UAVs for Fast Autonomous Reconstruction
Unmanned Aerial Vehicles (UAVs) have gained significant popularity in scene reconstruction. This paper presents SOAR, a LiDAR-Visual heterogeneous multi-UAV system specifically designed for fast autonomous reconstruction of complex environments. Our system comprises a LiDAR-equipped explorer with a large field-of-view (FoV), alongside photographers equipped with cameras. To ensure rapid acquisition of the scene's surface geometry, we employ a surface frontier-based exploration strategy for the explorer. As the surface is progressively explored, we identify the uncovered areas and generate viewpoints incrementally. These viewpoints are then assigned to photographers through solving a Consistent Multiple Depot Multiple Traveling Salesman Problem (Consistent-MDMTSP), which optimizes scanning efficiency while ensuring task consistency. Finally, photographers utilize the assigned viewpoints to determine optimal coverage paths for acquiring images. We present extensive benchmarks in a realistic simulator, which validate the performance of SOAR compared with classical and state-of-the-art methods. For more details, please see our project page at https://sysu-star.github.io/SOAR.
labels: cs.RO | __index_level_0__: 485,822
2101.11041
Complementary Composite Minimization, Small Gradients in General Norms, and Applications
Composite minimization is a powerful framework in large-scale convex optimization, based on decoupling of the objective function into terms with structurally different properties and allowing for more flexible algorithmic design. We introduce a new algorithmic framework for complementary composite minimization, where the objective function decouples into a (weakly) smooth and a uniformly convex term. This particular form of decoupling is pervasive in statistics and machine learning, due to its link to regularization. The main contributions of our work are summarized as follows. First, we introduce the problem of complementary composite minimization in general normed spaces; second, we provide a unified accelerated algorithmic framework to address broad classes of complementary composite minimization problems; and third, we prove that the algorithms resulting from our framework are near-optimal in most of the standard optimization settings. Additionally, we show that our algorithmic framework can be used to address the problem of making the gradients small in general normed spaces. As a concrete example, we obtain a nearly-optimal method for the standard $\ell_1$ setup (small gradients in the $\ell_{\infty}$ norm), essentially matching the bound of Nesterov (2012) that was previously known only for the Euclidean setup. Finally, we show that our composite methods are broadly applicable to a number of regression and other classes of optimization problems, where regularization plays a key role. Our methods lead to complexity bounds that are either new or match the best existing ones.
labels: cs.LG, Other | __index_level_0__: 217,135
2103.11944
Learning Accurate Business Process Simulation Models from Event Logs via Automated Process Discovery and Deep Learning
Business process simulation is a well-known approach to estimate the impact of changes to a process with respect to time and cost measures -- a practice known as what-if process analysis. The usefulness of such estimations hinges on the accuracy of the underlying simulation model. Data-Driven Simulation (DDS) methods leverage process mining techniques to learn process simulation models from event logs. Empirical studies have shown that, while DDS models adequately capture the observed sequences of activities and their frequencies, they fail to accurately capture the temporal dynamics of real-life processes. In contrast, generative Deep Learning (DL) models are better able to capture such temporal dynamics. The drawback of DL models is that users cannot alter them for what-if analysis due to their black-box nature. This paper presents a hybrid approach to learn process simulation models from event logs wherein a (stochastic) process model is extracted via DDS techniques, and then combined with a DL model to generate timestamped event sequences. An experimental evaluation shows that the resulting hybrid simulation models match the temporal accuracy of pure DL models, while partially retaining the what-if analysis capability of DDS approaches.
labels: cs.AI, Other | __index_level_0__: 226,018
2207.05054
Demystifying Unsupervised Semantic Correspondence Estimation
We explore semantic correspondence estimation through the lens of unsupervised learning. We thoroughly evaluate several recently proposed unsupervised methods across multiple challenging datasets using a standardized evaluation protocol where we vary factors such as the backbone architecture, the pre-training strategy, and the pre-training and finetuning datasets. To better understand the failure modes of these methods, and in order to provide a clearer path for improvement, we provide a new diagnostic framework along with a new performance metric that is better suited to the semantic matching task. Finally, we introduce a new unsupervised correspondence approach which utilizes the strength of pre-trained features while encouraging better matches during training. This results in significantly better matching performance compared to current state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
307,404
2408.14811
Brain-inspired Artificial Intelligence: A Comprehensive Review
Current artificial intelligence (AI) models often focus on enhancing performance through meticulous parameter tuning and optimization techniques. However, the fundamental design principles behind these models receive comparatively less attention, which can limit our understanding of their potential and constraints. This comprehensive review explores the diverse design inspirations that have shaped modern AI models, i.e., brain-inspired artificial intelligence (BIAI). We present a classification framework that categorizes BIAI approaches into physical structure-inspired and human behavior-inspired models. We also examine the real-world applications where different BIAI models excel, highlighting their practical benefits and deployment challenges. By delving into these areas, we provide new insights and propose future research directions to drive innovation and address current gaps in the field. This review offers researchers and practitioners a comprehensive overview of the BIAI landscape, helping them harness its potential and expedite advancements in AI development.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
483,684
2312.03262
Low-Cost High-Power Membership Inference Attacks
Membership inference attacks aim to detect if a particular data point was used in training a model. We design a novel statistical test to perform robust membership inference attacks (RMIA) with low computational overhead. We achieve this by a fine-grained modeling of the null hypothesis in our likelihood ratio tests, and effectively leveraging both reference models and reference population data samples. RMIA has superior test power compared with prior methods, throughout the TPR-FPR curve (even at extremely low FPR, as low as 0). Under computational constraints, where only a limited number of pre-trained reference models (as few as 1) are available, and also when we vary other elements of the attack (e.g., data distribution), our method performs exceptionally well, unlike prior attacks that approach random guessing. RMIA lays the groundwork for practical yet accurate data privacy risk assessment in machine learning.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
413,190
cs/0611073
Prefix Codes for Power Laws with Countable Support
In prefix coding over an infinite alphabet, methods that consider specific distributions generally consider those that decline more quickly than a power law (e.g., Golomb coding). Particular power-law distributions, however, model many random variables encountered in practice. For such random variables, compression performance is judged via estimates of expected bits per input symbol. This correspondence introduces a family of prefix codes with an eye towards near-optimal coding of known distributions. Compression performance is precisely estimated for well-known probability distributions using these codes and using previously known prefix codes. One application of these near-optimal codes is an improved representation of rational numbers.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
539,878
2210.02400
Emotion Twenty Questions Dialog System for Lexical Emotional Intelligence
This paper presents a web-based demonstration of Emotion Twenty Questions (EMO20Q), a dialog game whose purpose is to study how people describe emotions. EMO20Q can also be used to develop artificially intelligent dialog agents that can play the game. In previous work, an EMO20Q agent used a sequential Bayesian machine learning model and could play the question-asking role. Newer transformer-based neural machine learning models have made it possible to develop an agent for the question-answering role. This demo paper describes the recent developments in the question-answering role of the EMO20Q game, which requires the agent to respond to more open-ended inputs. Furthermore, we also describe the design of the system, including the web-based front-end, agent architecture and programming, and updates to earlier software used. The demo system will be available to collect pilot data during the ACII conference and this data will be used to inform future experiments and system design.
true
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
321,634
2002.06652
SBERT-WK: A Sentence Embedding Method by Dissecting BERT-based Word Models
Sentence embedding is an important research topic in natural language processing (NLP) since it can transfer knowledge to downstream tasks. Meanwhile, a contextualized word representation, called BERT, achieves state-of-the-art performance in quite a few NLP tasks. Yet, it is an open problem to generate a high-quality sentence representation from BERT-based word models. It was shown in a previous study that different layers of BERT capture different linguistic properties. This allows us to fuse information across layers to find a better sentence representation. In this work, we study the layer-wise pattern of the word representation of deep contextualized models. Then, we propose a new sentence embedding method by dissecting BERT-based word models through geometric analysis of the space spanned by the word representation. It is called the SBERT-WK method. No further training is required in SBERT-WK. We evaluate SBERT-WK on semantic textual similarity and downstream supervised tasks. Furthermore, ten sentence-level probing tasks are presented for detailed linguistic analysis. Experiments show that SBERT-WK achieves state-of-the-art performance. Our code is publicly available.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
164,258
2401.06152
Graph-Network-Based Predictive Modeling for Highly Cross-Linked Polymer Systems
In this study, a versatile methodology for initiating polymerization from monomers in highly cross-linked materials is investigated. As polymerization progresses, force-field parameters undergo continuous modification due to the formation of new chemical bonds. This dynamic process not only impacts the atoms directly involved in bonding, but also influences the neighboring atomic environment. Monitoring these complex changes in highly cross-linked structures poses a challenge. To address this issue, we introduce a graph-network-based algorithm that offers both rapid and accurate predictions. The algorithm merges polymer construction protocols with LAMMPS, a large-scale molecular dynamics simulation software. The adaptability of this code has been demonstrated by its successful application to various amorphous polymers, including porous polymer networks (PPNs), and epoxy-resins, while the algorithm has been employed for additional tasks, such as implementing pore-piercing deformations and calculating material properties.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
421,038
2502.10184
Realistic Evaluation of Deep Partial-Label Learning Algorithms
Partial-label learning (PLL) is a weakly supervised learning problem in which each example is associated with multiple candidate labels and only one is the true label. In recent years, many deep PLL algorithms have been developed to improve model performance. However, we find that some early developed algorithms are often underestimated and can outperform many later algorithms with complicated designs. In this paper, we delve into the empirical perspective of PLL and identify several critical but previously overlooked issues. First, model selection for PLL is non-trivial, but has never been systematically studied. Second, the experimental settings are highly inconsistent, making it difficult to evaluate the effectiveness of the algorithms. Third, there is a lack of real-world image datasets that can be compatible with modern network architectures. Based on these findings, we propose PLENCH, the first Partial-Label learning bENCHmark to systematically compare state-of-the-art deep PLL algorithms. We investigate the model selection problem for PLL for the first time, and propose novel model selection criteria with theoretical guarantees. We also create Partial-Label CIFAR-10 (PLCIFAR10), an image dataset of human-annotated partial labels collected from Amazon Mechanical Turk, to provide a testbed for evaluating the performance of PLL algorithms in more realistic scenarios. Researchers can quickly and conveniently perform a comprehensive and fair evaluation and verify the effectiveness of newly developed algorithms based on PLENCH. We hope that PLENCH will facilitate standardized, fair, and practical evaluation of PLL algorithms in the future.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
533,765
2310.02133
Learning Reliable Logical Rules with SATNet
Bridging logical reasoning and deep learning is crucial for advanced AI systems. In this work, we present a new framework that addresses this goal by generating interpretable and verifiable logical rules through differentiable learning, without relying on pre-specified logical structures. Our approach builds upon SATNet, a differentiable MaxSAT solver that learns the underlying rules from input-output examples. Despite its efficacy, the learned weights in SATNet are not straightforwardly interpretable, failing to produce human-readable rules. To address this, we propose a novel specification method called "maximum equality", which enables the interchangeability between the learned weights of SATNet and a set of propositional logical rules in weighted MaxSAT form. With the decoded weighted MaxSAT formula, we further introduce several effective verification techniques to validate it against the ground truth rules. Experiments on stream transformations and Sudoku problems show that our decoded rules are highly reliable: using exact solvers on them could achieve 100% accuracy, whereas the original SATNet fails to give correct solutions in many cases. Furthermore, we formally verify that our decoded logical rules are functionally equivalent to the ground truth ones.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
396,721
0706.1179
Collaborative product and process model: Multiple Viewpoints approach
The design and development of complex products invariably involves many actors who have different points of view on the problem they are addressing, the product being developed, and the process by which it is being developed. The actors' viewpoints approach was designed to provide an organisational framework in which these different perspectives or points of view, and their relationships, could be explicitly gathered and formatted (by the focus of each actor's activity). The approach acknowledges the inevitability of multiple interpretations of product information as different views, promotes the gathering of actors' interests, and encourages the retrieval of adequate information while providing support for integration through PLM and/or SCM collaboration. In this paper, we present our multiple viewpoints approach and illustrate it with an industrial example on a cyclone vessel product.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
321
2312.07048
Edge Wasserstein Distance Loss for Oriented Object Detection
Regression loss design is an essential topic for oriented object detection. Due to the periodicity of the angle and the ambiguity of the width and height definition, the traditional L1-distance loss and its variants have suffered from metric discontinuity and the square-like problem. As a solution, distribution-based methods show significant advantages by representing oriented boxes as distributions. Differing from methods that exploit the Gaussian distribution to obtain an analytical form of the distance measure, we propose a novel oriented regression loss, the Edge Wasserstein Distance (EWD) loss, to alleviate the square-like problem. Specifically, for the oriented box (OBox) representation, we choose a specially-designed distribution whose probability density function is only nonzero over the edges. On this basis, we adopt the Wasserstein distance as the measure. Besides, based on the edge representation of the OBox, the EWD loss can be generalized to quadrilateral and polynomial regression scenarios. Experiments on multiple popular datasets and different detectors show the effectiveness of the proposed method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
414,777
2302.07412
A theory of desirable things
Inspired by the theory of desirable gambles that is used to model uncertainty in the field of imprecise probabilities, I present a theory of desirable things. Its aim is to model a subject's beliefs about which things are desirable. What the things are is not important, nor is what it means for them to be desirable. It can be applied to gambles, calling them desirable if a subject accepts them, but it can just as well be applied to pizzas, calling them desirable if my friend Arthur likes to eat them. Other useful examples of things one might apply this theory to are propositions, horse lotteries, or preferences between any of the above. Regardless of the particular things that are considered, inference rules are imposed by means of an abstract closure operator, and models that adhere to these rules are called coherent. I consider two types of models, each of which can capture a subject's beliefs about which things are desirable: sets of desirable things and sets of desirable sets of things. A crucial result is that the latter type can be represented by a set of the former.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
345,721
1411.5224
Can we build a conscious machine?
The underlying physiological mechanisms of generating conscious states are still unknown. To make progress on the problem of consciousness, we will need to experimentally design a system that evolves in a similar way our brains do. Recent experimental data show that the multiscale nature of the evolving human brain can be implemented by reprogramming human cells. A hybrid system can be designed to include an evolving brain equipped with digital computers that maintain homeostasis and provide the right amount of nutrients and oxygen for the brain growth. Shaping the structure of the evolving brain will be progressively achieved by controlling spatial organization of various types of cells. Following a specific program, the evolving brain can be trained using substitutional reality to learn and experience live scenes. We already know from neuroelectrodynamics that meaningful information in the brain is electrically (wirelessly) read out and written fast in neurons and synapses at the molecular (protein) level during the generation of action potentials and synaptic activities. Since with training, meaningful information accumulates and is electrically integrated in the brain, one can predict, that this gradual process of training will trigger a tipping point for conscious experience to emerge in the hybrid system.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
37,715
1706.02061
An Extended Relevance Model for Session Search
The session search task aims at best serving the user's information need given her previous search behavior during the session. We propose an extended relevance model that captures the user's dynamic information need in the session. Our relevance modelling approach is directly driven by the user's query reformulation (change) decisions and the estimate of how much the user's search behavior affects such decisions. Overall, we demonstrate that the proposed approach significantly boosts session search performance.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
74,911
2211.01226
DEArt: Dataset of European Art
Large datasets that were made publicly available to the research community over the last 20 years have been a key enabling factor for the advances in deep learning algorithms for NLP or computer vision. These datasets are generally pairs of aligned image / manually annotated metadata, where images are photographs of everyday life. Scholarly and historical content, on the other hand, treat subjects that are not necessarily popular to a general audience, they may not always contain a large number of data points, and new data may be difficult or impossible to collect. Some exceptions do exist, for instance, scientific or health data, but this is not the case for cultural heritage (CH). The poor performance of the best models in computer vision - when tested over artworks - coupled with the lack of extensively annotated datasets for CH, and the fact that artwork images depict objects and actions not captured by photographs, indicate that a CH-specific dataset would be highly valuable for this community. We propose DEArt, at this point primarily an object detection and pose classification dataset meant to be a reference for paintings between the XIIth and the XVIIIth centuries. It contains more than 15000 images, about 80% non-iconic, aligned with manual annotations for the bounding boxes identifying all instances of 69 classes as well as 12 possible poses for boxes identifying human-like objects. Of these, more than 50 classes are CH-specific and thus do not appear in other datasets; these reflect imaginary beings, symbolic entities and other categories related to art. Additionally, existing datasets do not include pose annotations. Our results show that object detectors for the cultural heritage domain can achieve a level of precision comparable to state-of-the-art models for generic images via transfer learning.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
328,147
2412.17616
Facial Expression Analysis and Its Potentials in IoT Systems: A Contemporary Survey
Facial expressions convey human emotions and can be categorized into macro-expressions (MaEs) and micro-expressions (MiEs) based on duration and intensity. While MaEs are voluntary and easily recognized, MiEs are involuntary, rapid, and can reveal concealed emotions. The integration of facial expression analysis with Internet-of-Thing (IoT) systems has significant potential across diverse scenarios. IoT-enhanced MaE analysis enables real-time monitoring of patient emotions, facilitating improved mental health care in smart healthcare. Similarly, IoT-based MiE detection enhances surveillance accuracy and threat detection in smart security. This work aims at providing a comprehensive overview of research progress in facial expression analysis and explores its integration with IoT systems. We discuss the distinctions between our work and existing surveys, elaborate on advancements in MaE and MiE techniques across various learning paradigms, and examine their potential applications in IoT. We highlight challenges and future directions for the convergence of facial expression-based technologies and IoT systems, aiming to foster innovation in this domain. By presenting recent developments and practical applications, this study offers a systematic understanding of how facial expression analysis can enhance IoT systems in healthcare, security, and beyond.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
520,052
2103.05434
When is it permissible for artificial intelligence to lie? A trust-based approach
Conversational Artificial Intelligence (AI) used in industry settings can be trained to closely mimic human behaviors, including lying and deception. However, lying is often a necessary part of negotiation. To address this, we develop a normative framework for when it is ethical or unethical for a conversational AI to lie to humans, based on whether there is what we call "invitation of trust" in a particular scenario. Importantly, cultural norms play an important role in determining whether there is invitation of trust across negotiation settings, and thus an AI trained in one culture may not be generalizable to others. Moreover, individuals may have different expectations regarding the invitation of trust and propensity to lie for human vs. AI negotiators, and these expectations may vary across cultures as well. Finally, we outline how a conversational chatbot can be trained to negotiate ethically by applying autoregressive models to large dialog and negotiations datasets.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
223,975
1504.04090
Segmentation of Subspaces in Sequential Data
We propose Ordered Subspace Clustering (OSC) to segment data drawn from a sequentially ordered union of subspaces. Similar to Sparse Subspace Clustering (SSC) we formulate the problem as one of finding a sparse representation but include an additional penalty term to take care of sequential data. We test our method on data drawn from infrared hyper spectral, video and motion capture data. Experiments show that our method, OSC, outperforms the state of the art methods: Spatial Subspace Clustering (SpatSC), Low-Rank Representation (LRR) and SSC.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
42,101
2108.03873
Rain Removal and Illumination Enhancement Done in One Go
Rain removal plays an important role in the restoration of degraded images. Recently, data-driven methods have achieved remarkable success. However, these approaches neglect that the appearance of rain is often accompanied by low-light conditions, which further degrade image quality. Therefore, it is indispensable to jointly remove the rain and enhance the light for real-world rain image restoration. In this paper, we aim to address this problem from two aspects. First, we propose a novel entangled network, namely EMNet, which can remove rain and enhance illumination in one go. Specifically, two encoder-decoder networks exchange complementary information through an entanglement structure and perform rain removal and illumination enhancement in parallel. Considering that the encoder-decoder structure is unreliable in preserving spatial details, we employ a detail recovery network to restore the desired fine texture. Second, we present a new synthetic dataset, namely DarkRain, to boost the development of rain image restoration algorithms in practical scenarios. DarkRain not only contains different degrees of rain, but also considers different lighting conditions, more realistically simulating rainfall in the real world. EMNet is extensively evaluated on the proposed benchmark and achieves state-of-the-art results. In addition, after a simple transformation, our method outshines existing methods in both rain removal and low-light image enhancement. The source code and dataset will be made publicly available.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
249,810
2311.02282
Contrastive Multi-Modal Representation Learning for Spark Plug Fault Diagnosis
Because a single sensory measurement is incapable of providing enough information for condition monitoring of some complex engineered industrial mechanisms, and to overcome the misleading noise of a single sensor, multiple sensors are installed to improve the condition monitoring of some industrial equipment. Therefore, an efficient data fusion strategy is demanded. In this research, we present a Denoising Multi-Modal Autoencoder with a unique training strategy based on the contrastive learning paradigm, both being utilized for the first time in the machine health monitoring realm. The presented approach, which leverages the merits of both supervised and unsupervised learning, not only achieves excellent performance in fusing multiple modalities (or views) of data into an enriched common representation but also takes data fusion to the next level, wherein one of the views can be omitted during inference time with very slight performance reduction, or even without any reduction at all. The presented methodology enables multi-modal fault diagnosis systems to perform more robustly in case of sensor failure occurrence, and one can also intentionally omit one of the sensors (the more expensive one) in order to build a more cost-effective condition monitoring system without sacrificing performance for practical purposes. The effectiveness of the presented methodology is examined on a real-world private multi-modal dataset gathered under non-laboratory conditions from a complex engineered mechanism, an inline four-stroke spark-ignition engine, aiming for spark plug fault diagnosis. This dataset, which contains the accelerometer and acoustic signals as two modalities, has a very slight amount of fault, and achieving good performance on such a dataset promises that the presented method can perform well on other equipment as well.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
405,372
2208.03907
Bridging the Gap: Commonality and Differences between Online and Offline COVID-19 Data
With the onset of the COVID-19 pandemic, news outlets and social media have become central tools for disseminating and consuming information. Because of their ease of access, users seek COVID-19-related information from online social media (i.e., online news) and news outlets (i.e., offline news). Online and offline news are often connected, sharing common topics while each has unique, different topics. A gap between these two news sources can lead to misinformation propagation. For instance, according to the Guardian, most COVID-19 misinformation comes from users on social media. Without fact-checking social media news, misinformation can lead to health threats. In this paper, we focus on the novel problem of bridging the gap between online and offline data by monitoring their common and distinct topics generated over time. We employ Twitter (online) and local news (offline) data for a time span of two years. Using online matrix factorization, we analyze and study online and offline COVID-19-related data differences and commonalities. We design experiments to show how online and offline data are linked together and what trends they follow.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
311,937
1601.08162
A State-of-the-art Integrated Transportation Simulation Platform
Nowadays, universities and companies have a huge need for simulation and modelling methodologies. In the particular case of traffic and transportation, making physical modifications to real traffic networks can be highly expensive, dependent on political decisions, and highly disruptive to the environment. However, while studying a specific domain or problem, analysing a problem through simulation may not be trivial and may need several simulation tools, hence raising interoperability issues. To overcome these problems, we propose an agent-directed transportation simulation platform, delivered through the cloud by means of services. We intend to use the IEEE standard HLA (High Level Architecture) for simulator interoperability and agents for control and coordination. Our motivations are to allow multiresolution analysis of complex domains, to allow experts to collaborate on the analysis of a common problem, and to allow co-simulation and synergy of different application domains. This paper starts by presenting some preliminary background concepts to help better understand the scope of this work. After that, the results of a literature review are shown. Finally, the general architecture of a transportation simulation platform is proposed.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
51,503
1703.02102
Revisiting stochastic off-policy action-value gradients
Off-policy stochastic actor-critic methods rely on approximating the stochastic policy gradient in order to derive an optimal policy. One may also derive the optimal policy by approximating the action-value gradient. The use of action-value gradients is desirable as policy improvement occurs along the direction of steepest ascent. This has been studied extensively within the context of natural gradient actor-critic algorithms and more recently within the context of deterministic policy gradients. In this paper we briefly discuss the off-policy stochastic counterpart to deterministic action-value gradients, as well as an incremental approach for following the policy gradient in lieu of the natural gradient.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
69,490
2210.05710
Predictive Event Segmentation and Representation with Neural Networks: A Self-Supervised Model Assessed by Psychological Experiments
People segment complex, ever-changing and continuous experience into basic, stable and discrete spatio-temporal experience units, called events. Event segmentation literature investigates the mechanisms that allow people to extract events. Event segmentation theory points out that people predict ongoing activities and observe prediction error signals to find event boundaries that keep events apart. In this study, we investigated the mechanism giving rise to this ability by a computational model and accompanying psychological experiments. Inspired by event segmentation theory and predictive processing, we introduced a self-supervised model of event segmentation. This model consists of neural networks that predict the sensory signal in the next time-step to represent different events, and a cognitive model that regulates these networks on the basis of their prediction errors. In order to verify the ability of our model in segmenting events, learning them during passive observation, and representing them in its internal representational space, we prepared a video that depicts human behaviors represented by point-light displays. We compared the event segmentation behaviors of participants and our model on this video at two hierarchical event segmentation levels. By using the point-biserial correlation technique, we demonstrated that the event segmentation decisions of our model correlated with the responses of participants. Moreover, by approximating the representation space of participants by a similarity-based technique, we showed that our model formed a representation space similar to those of participants. The result suggests that our model, which tracks the prediction error signals, can produce human-like event boundaries and event representations. Finally, we discussed our contribution to the literature of event cognition and our understanding of how event segmentation is implemented in the brain.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
322,966
1003.5309
Gossip Algorithms for Distributed Signal Processing
Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and more robust gossip algorithms and deriving theoretical performance guarantees. This article presents an overview of recent work in the area. We describe convergence rate results, which are related to the number of transmitted messages and thus the amount of energy consumed in the network for gossiping. We discuss issues related to gossiping over wireless links, including the effects of quantization and noise, and we illustrate the use of gossip algorithms for canonical signal processing tasks including distributed estimation, source localization, and compression.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
6,018
2204.11841
A Closer Look at Personalization in Federated Image Classification
Federated Learning (FL) is developed to learn a single global model across decentralized data, but it is susceptible to failure when realizing client-specific personalization in the presence of statistical heterogeneity. However, existing studies focus on learning either a robust global model or personalized classifiers, which diverge due to inconsistent objectives. This paper shows that it is possible to achieve flexible personalization after the convergence of the global model by introducing representation learning. In this paper, we first analyze and determine that non-IID data harms representation learning of the global model. Existing FL methods adhere to the scheme of jointly learning representations and classifiers, where the global model is an average of classification-based local models that are consistently subject to heterogeneity from non-IID data. As a solution, we separate representation learning from classification learning in FL and propose RepPer, an independent two-stage personalized FL framework. We first learn the client-side feature representation models that are robust to non-IID data and aggregate them into a global common representation model. After that, we achieve personalization by learning a classifier head for each client, based on the common representation obtained at the former stage. Notably, the proposed two-stage learning scheme of RepPer can be potentially used for lightweight edge computing that involves devices with constrained computation power. Experiments on various datasets (CIFAR-10/100, CINIC-10) and heterogeneous data setup show that RepPer outperforms alternatives in flexibility and personalization on non-IID data.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
293,285
2112.06677
Firefly: Supporting Drone Localization With Visible Light Communication
Drones are not fully trusted yet. Their reliance on radios and cameras for navigation raises safety and privacy concerns. These systems can fail, causing accidents, or be misused for unauthorized recordings. Considering recent regulations allowing commercial drones to operate only at night, we propose a radically new approach where drones obtain navigation information from artificial lighting. In our system, standard light bulbs modulate their intensity to send beacons and drones decode this information with a simple photodiode. This optical information is combined with the inertial and altitude sensors in the drones to provide localization without the need for radios, GPS or cameras. Our framework is the first to provide 3D drone localization with light and we evaluate it with a testbed consisting of four light beacons and a mini-drone. We show that our approach locates the drone within a few decimeters of the actual position and, compared to state-of-the-art positioning methods, reduces the localization error by 42%.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
271,262
2410.08917
AutoPersuade: A Framework for Evaluating and Explaining Persuasive Arguments
We introduce AutoPersuade, a three-part framework for constructing persuasive messages. First, we curate a large dataset of arguments with human evaluations. Next, we develop a novel topic model to identify argument features that influence persuasiveness. Finally, we use this model to predict the effectiveness of new arguments and assess the causal impact of different components to provide explanations. We validate AutoPersuade through an experimental study on arguments for veganism, demonstrating its effectiveness with human studies and out-of-sample predictions.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
497,342
1907.12917
Covering up bias in CelebA-like datasets with Markov blankets: A post-hoc cure for attribute prior avoidance
Attribute prior avoidance entails subconscious or willful non-modeling of (meta)attributes that datasets are oft born with, such as the 40 semantic facial attributes associated with the CelebA and CelebA-HQ datasets. The consequences of this infirmity, we discover, are especially stark in state-of-the-art deep generative models learned on these datasets that just model the pixel-space measurements, resulting in an inter-attribute bias-laden latent space. This viscerally manifests itself when we perform face manipulation experiments based on latent vector interpolations. In this paper, we address this and propose a post-hoc solution that utilizes an Ising attribute prior learned in the attribute space and showcase its efficacy via qualitative experiments.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
140,240
2502.07017
Finding Words Associated with DIF: Predicting Differential Item Functioning using LLMs and Explainable AI
We fine-tuned and compared several encoder-based Transformer large language models (LLM) to predict differential item functioning (DIF) from the item text. We then applied explainable artificial intelligence (XAI) methods to these models to identify specific words associated with DIF. The data included 42,180 items designed for English language arts and mathematics summative state assessments among students in grades 3 to 11. Prediction $R^2$ ranged from .04 to .32 among eight focal and reference group pairs. Our findings suggest that many words associated with DIF reflect minor sub-domains included in the test blueprint by design, rather than construct-irrelevant item content that should be removed from assessments. This may explain why qualitative reviews of DIF items often yield confusing or inconclusive results. Our approach can be used to screen words associated with DIF during the item-writing process for immediate revision, or help review traditional DIF analysis results by highlighting key words in the text. Extensions of this research can enhance the fairness of assessment programs, especially those that lack resources to build high-quality items, and among smaller subpopulations where we do not have sufficient sample sizes for traditional DIF analyses.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
532,386
1906.05238
Understanding Vulnerability of Communities in Complex Networks
In this paper, we study the crucial elements of complex networks, namely nodes and edges, and their properties, such as their community structure, which play an important role in dictating the robustness of the network towards structural perturbations. Specifically, we want to identify all vital nodes, which when removed would lead to a large change in the underlying community structure of the network. This problem is extremely important because the community structure of a network allows deep underlying insights into how the function of a network and its topology affect each other. Moreover, it even provides a way to condense large graphs into smaller graphs where each community acts as a meta node and hence aids in easier network analysis. If this community structure were to be compromised by either accidental or intentional perturbations to the network, that would make such analysis difficult. Since the problem of identifying such vital nodes is computationally intractable, we propose some heuristics that allow us to find solutions close to the optimal solution. To certify the effectiveness of our approach, we first test these heuristics on small networks, and then move to larger networks to show that we achieve similar results. The results reveal that the proposed approaches are effective to analyze the vulnerability of communities in graphs irrespective of their size and scale. From the application point of view, we show that the proposed algorithm is scalable and can be applied to the information diffusion task to curtail the spread of active nodes, which was observed empirically. Additionally, we show the performance of our algorithm through an extrinsic evaluation -- we employ two tasks, link prediction and information diffusion, and show that the effect of our algorithm on these tasks is higher than the other baselines.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
134,967
1103.3397
Criterions for locally dense subgraphs
Community detection is one of the most investigated problems in the field of complex networks. Although several methods were proposed, there is still no precise definition of communities. As a step towards a definition, I highlight two necessary properties of communities, separation and internal cohesion, the latter being a new concept. I propose a local method of community detection based on two-dimensional local optimization, which I tested on common benchmarks and on the word association database.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
9,650
2202.01560
Extending turbulence model uncertainty quantification using machine learning
In order to achieve a more virtual design and certification process of jet engines in aviation industry, the uncertainty bounds for computational fluid dynamics have to be known. This work shows the application of a machine learning methodology to quantify the epistemic uncertainties of turbulence models. The underlying method in order to estimate the uncertainty bounds is based on an eigenspace perturbation of the Reynolds stress tensor in combination with random forests.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
278,520
2405.05814
MSDiff: Multi-Scale Diffusion Model for Ultra-Sparse View CT Reconstruction
Computed Tomography (CT) technology reduces radiation hazards to the human body through sparse sampling, but fewer sampling angles pose challenges for image reconstruction. Score-based generative models are widely used in sparse-view CT reconstruction, but their performance diminishes significantly with a sharp reduction in projection angles. Therefore, we propose an ultra-sparse view CT reconstruction method utilizing multi-scale diffusion models (MSDiff), designed to concentrate on the global distribution of information and facilitate the reconstruction of sparse views with local image characteristics. Specifically, the proposed model ingeniously integrates information from both comprehensive sampling and selectively sparse sampling techniques. Through precise adjustments in the diffusion model, it is capable of extracting diverse noise distributions, furthering the understanding of the overall structure of images, and aiding the fully sampled model in recovering image information more effectively. By leveraging the inherent correlations within the projection data, we have designed an equidistant mask, enabling the model to focus its attention more effectively. Experimental results demonstrated that the multi-scale model approach significantly improved the quality of image reconstruction under ultra-sparse angles, with good generalization across various datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
453,072
2009.04656
Learning Universal Representations from Word to Sentence
Despite the well-developed cutting-edge representation learning for language, most language representation models usually focus on a specific level of linguistic unit, which causes great inconvenience when being confronted with handling multiple layers of linguistic objects in a unified way. Thus this work introduces and explores universal representation learning, i.e., embeddings of different levels of linguistic units in a uniform vector space through a task-independent evaluation. We present our approach of constructing analogy datasets in terms of words, phrases and sentences and experiment with multiple representation models to examine geometric properties of the learned vector space. Then we empirically verify that well pre-trained Transformer models incorporated with appropriate training settings may effectively yield universal representation. Especially, our implementation of fine-tuning ALBERT on NLI and PPDB datasets achieves the highest accuracy on analogy tasks in different language levels. Further experiments on the insurance FAQ task show effectiveness of universal representation models in real-world applications.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
195,114
1809.01229
t-Exponential Memory Networks for Question-Answering Machines
Recent advances in deep learning have brought to the fore models that can make multiple computational steps in the service of completing a task; these are capable of describing long-term dependencies in sequential data. Novel recurrent attention models over possibly large external memory modules constitute the core mechanisms that enable these capabilities. Our work addresses learning subtler and more complex underlying temporal dynamics in language modeling tasks that deal with sparse sequential data. To this end, we improve upon these recent advances, by adopting concepts from the field of Bayesian statistics, namely variational inference. Our proposed approach consists in treating the network parameters as latent variables with a prior distribution imposed over them. Our statistical assumptions go beyond the standard practice of postulating Gaussian priors. Indeed, to allow for handling outliers, which are prevalent in long observed sequences of multivariate data, multivariate t-exponential distributions are imposed. On this basis, we proceed to infer corresponding posteriors; these can be used for inference and prediction at test time, in a way that accounts for the uncertainty in the available sparse training data. Specifically, to allow for our approach to best exploit the merits of the t-exponential family, our method considers a new t-divergence measure, which generalizes the concept of the Kullback-Leibler divergence. We perform an extensive experimental evaluation of our approach, using challenging language modeling benchmarks, and illustrate its superiority over existing state-of-the-art techniques.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
106,753
1912.02098
Expressiveness and Learning of Hidden Quantum Markov Models
Extending classical probabilistic reasoning using the quantum mechanical view of probability has been of recent interest, particularly in the development of hidden quantum Markov models (HQMMs) to model stochastic processes. However, there has been little progress in characterizing the expressiveness of such models and learning them from data. We tackle these problems by showing that HQMMs are a special subclass of the general class of observable operator models (OOMs) that do not suffer from the \emph{negative probability problem} by design. We also provide a feasible retraction-based learning algorithm for HQMMs using constrained gradient descent on the Stiefel manifold of model parameters. We demonstrate that this approach is faster and scales to larger models than previous learning algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
156,257
2309.07755
Generative AI Text Classification using Ensemble LLM Approaches
Large Language Models (LLMs) have shown impressive performance across a variety of Artificial Intelligence (AI) and natural language processing tasks, such as content creation, report generation, etc. However, unregulated malign application of these models can create undesirable consequences such as generation of fake news, plagiarism, etc. As a result, accurate detection of AI-generated language can be crucial in responsible usage of LLMs. In this work, we explore 1) whether a certain body of text is AI generated or written by human, and 2) attribution of a specific language model in generating a body of text. Texts in both English and Spanish are considered. The datasets used in this study are provided as part of the Automated Text Identification (AuTexTification) shared task. For each of the research objectives stated above, we propose an ensemble neural model that generates probabilities from different pre-trained LLMs which are used as features to a Traditional Machine Learning (TML) classifier following it. For the first task of distinguishing between AI and human generated text, our model ranked in fifth and thirteenth place (with macro $F1$ scores of 0.733 and 0.649) for English and Spanish texts, respectively. For the second task on model attribution, our model ranked in first place with macro $F1$ scores of 0.625 and 0.653 for English and Spanish texts, respectively.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
391,898
1510.02845
A Comparison of MIMO Techniques in Downlink Millimeter Wave Cellular Networks with Hybrid Beamforming
Large antenna arrays will be needed in future millimeter wave (mmWave) cellular networks, enabling a large number of different possible antenna architectures and multiple-input multiple-output (MIMO) techniques. It is still unclear which MIMO technique is most desirable as a function of different network parameters. This paper, therefore, compares the coverage and rate performance of hybrid beamforming enabled multi-user (MU) MIMO and single-user spatial multiplexing (SM) with single-user analog beamforming (SU-BF). A stochastic geometry model for coverage and rate analysis is proposed for MU-MIMO mmWave cellular networks, taking into account important mmWave-specific hardware constraints for hybrid analog/digital precoders and combiners, and a blockage-dependent channel model which is sparse in angular domain. The analytical results highlight the coverage, rate and power consumption tradeoffs in multiuser mmWave networks. With perfect channel state information at the transmitter and round robin scheduling, MU-MIMO is usually a better choice than SM or SU-BF in mmWave cellular networks. This observation, however, neglects any overhead due to channel acquisition or computational complexity. Incorporating the impact of such overheads, our results can be re-interpreted so as to quantify the minimum allowable efficiency of MU-MIMO to provide higher rates than SM or SU-BF.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
47,761
2210.01060
Generating Hidden Markov Models from Process Models Through Nonnegative Tensor Factorization
Monitoring of industrial processes is a critical capability in industry and in government to ensure reliability of production cycles, quick emergency response, and national security. Process monitoring allows users to gauge the progress of an organization in an industrial process or predict the degradation or aging of machine parts in processes taking place at a remote location. Similar to many data science applications, we usually only have access to limited raw data, such as satellite imagery, short video clips, event logs, and signatures captured by a small set of sensors. To combat data scarcity, we leverage the knowledge of Subject Matter Experts (SMEs) who are familiar with the actions of interest. SMEs provide expert knowledge of the essential activities required for task completion and the resources necessary to carry out each of these activities. Various process mining techniques have been developed for this type of analysis; typically such approaches combine theoretical process models built based on domain expert insights with ad-hoc integration of available pieces of raw data. Here, we introduce a novel mathematically sound method that integrates theoretical process models (as proposed by SMEs) with interrelated minimal Hidden Markov Models (HMM), built via nonnegative tensor factorization. Our method consolidates: (a) theoretical process models, (b) HMMs, (c) coupled nonnegative matrix-tensor factorizations, and (d) custom model selection. To demonstrate our methodology and its abilities, we apply it on simple synthetic and real world process models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
321,115
2311.01727
Flexible Error Mitigation of Quantum Processes with Data Augmentation Empowered Neural Model
Neural networks have shown their effectiveness in various tasks in the realm of quantum computing. However, their application in quantum error mitigation, a crucial step towards realizing practical quantum advancements, has been restricted by reliance on noise-free statistics. To tackle this critical challenge, we propose a data augmentation empowered neural model for error mitigation (DAEM). Our model does not require any prior knowledge about the specific noise type and measurement settings and can estimate noise-free statistics solely from the noisy measurement results of the target quantum process, rendering it highly suitable for practical implementation. In numerical experiments, we show the model's superior performance in mitigating various types of noise, including Markovian noise and Non-Markovian noise, compared with previous error mitigation methods. We further demonstrate its versatility by employing the model to mitigate errors in diverse types of quantum processes, including those involving large-scale quantum systems and continuous-variable quantum states. This powerful data augmentation-empowered neural model for error mitigation establishes a solid foundation for realizing more reliable and robust quantum technologies in practical applications.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
405,150
2211.02636
Towards Alzheimer's Disease Progression Assessment: A Review of Machine Learning Methods
Alzheimer's Disease (AD), as the most devastating neurodegenerative disease worldwide, has reached nearly 10 million new cases annually. Current technology provides unprecedented opportunities to study the progression and etiology of this disease with the advances in imaging techniques. With the recent emergence of a society driven by big data and machine learning (ML), researchers have exerted considerable effort to summarize recent advances in ML-based AD diagnosis. Here, we outline some of the most prevalent and recent ML models for assessing the progression of AD and provide insights on the challenges, opportunities, and future directions that could be advantageous to future research in AD using ML.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
328,642
2304.01534
FedBEVT: Federated Learning Bird's Eye View Perception Transformer in Road Traffic Systems
Bird's eye view (BEV) perception is becoming increasingly important in the field of autonomous driving. It uses multi-view camera data to learn a transformer model that directly projects the perception of the road environment onto the BEV perspective. However, training a transformer model often requires a large amount of data, and as camera data for road traffic are often private, they are typically not shared. Federated learning offers a solution that enables clients to collaborate and train models without exchanging data but model parameters. In this paper, we introduce FedBEVT, a federated transformer learning approach for BEV perception. In order to address two common data heterogeneity issues in FedBEVT: (i) diverse sensor poses, and (ii) varying sensor numbers in perception systems, we propose two approaches -- Federated Learning with Camera-Attentive Personalization (FedCaP) and Adaptive Multi-Camera Masking (AMCM), respectively. To evaluate our method in real-world settings, we create a dataset consisting of four typical federated use cases. Our findings suggest that FedBEVT outperforms the baseline approaches in all four use cases, demonstrating the potential of our approach for improving BEV perception in autonomous driving.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
356,117
2110.00917
Binary code optimization
This article shows that any type of binary data can be defined as a collection of codewords of variable length. This feature helps us to define an injective and surjective function from the suggested codewords to the required codewords. Therefore, by replacing the new codewords, the binary data becomes another binary data regarding the intended goals. One of these goals is to reduce data size. It means that instead of the original codewords of each binary data, the Huffman codewords are substituted to reduce the data size. One of the features of this method is positive compression for any type of binary data, that is, regardless of the size of the code table, the difference between the original data size and the data size after compression will be greater than or equal to zero. Another important and practical feature of this method is the use of symmetric codewords instead of the suggested codewords in order to create symmetry, reversibility and error resistance properties with two-way decoding.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
258,581
2502.10954
Learning to Stop Overthinking at Test Time
Test time scaling is currently one of the most active research areas that shows promise after training time scaling has reached its limits. Deep-thinking (DT) models are a class of recurrent models that can perform easy-to-hard generalization by assigning more compute to harder test samples. However, due to their inability to determine the complexity of a test sample, DT models have to use a large amount of computation for both easy and hard test samples. Excessive test time computation is wasteful and can cause the ``overthinking'' problem where more test time computation leads to worse results. In this paper, we introduce a test time training method for determining the optimal amount of computation needed for each sample during test time. We also propose Conv-LiGRU, a novel recurrent architecture for efficient and robust visual reasoning. Extensive experiments demonstrate that Conv-LiGRU is more stable than DT, effectively mitigates the ``overthinking'' phenomenon, and achieves superior accuracy.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
534,117
2501.11498
Dialect2SQL: A Novel Text-to-SQL Dataset for Arabic Dialects with a Focus on Moroccan Darija
The task of converting natural language questions (NLQs) into executable SQL queries, known as text-to-SQL, has gained significant interest in recent years, as it enables non-technical users to interact with relational databases. Many benchmarks, such as SPIDER and WikiSQL, have contributed to the development of new models and the evaluation of their performance. In addition, other datasets, like SEDE and BIRD, have introduced more challenges and complexities to better map real-world scenarios. However, these datasets primarily focus on high-resource languages such as English and Chinese. In this work, we introduce Dialect2SQL, the first large-scale, cross-domain text-to-SQL dataset in an Arabic dialect. It consists of 9,428 NLQ-SQL pairs across 69 databases in various domains. Along with SQL-related challenges such as long schemas, dirty values, and complex queries, our dataset also incorporates the complexities of the Moroccan dialect, which is known for its diverse source languages, numerous borrowed words, and unique expressions. This demonstrates that our dataset will be a valuable contribution to both the text-to-SQL community and the development of resources for low-resource languages.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
true
true
525,944
1903.01521
Efficient Winograd or Cook-Toom Convolution Kernel Implementation on Widely Used Mobile CPUs
The Winograd or Cook-Toom class of algorithms help to reduce the overall compute complexity of many modern deep convolutional neural networks (CNNs). Although there has been a lot of research done on model and algorithmic optimization of CNN, little attention has been paid to the efficient implementation of these algorithms on embedded CPUs, which usually have very limited memory and low power budget. This paper aims to fill this gap and focuses on the efficient implementation of Winograd or Cook-Toom based convolution on modern Arm Cortex-A CPUs, widely used in mobile devices today. Specifically, we demonstrate a reduction in inference latency by using a set of optimization strategies that improve the utilization of computational resources, and by effectively leveraging the ARMv8-A NEON SIMD instruction set. We evaluated our proposed region-wise multi-channel implementations on Arm Cortex-A73 platform using several representative CNNs. The results show significant performance improvements in full network, up to 60%, over existing im2row/im2col based optimization techniques.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
123,269
2104.00489
PyVertical: A Vertical Federated Learning Framework for Multi-headed SplitNN
We introduce PyVertical, a framework supporting vertical federated learning using split neural networks. The proposed framework allows a data scientist to train neural networks on data features vertically partitioned across multiple owners while keeping raw data on an owner's device. To link entities shared across different datasets' partitions, we use Private Set Intersection on IDs associated with data points. To demonstrate the validity of the proposed framework, we present the training of a simple dual-headed split neural network for a MNIST classification task, with data samples vertically distributed across two data owners and a data scientist.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
true
228,017
1707.04974
Adaptive Low-Rank Kernel Subspace Clustering
In this paper, we present a kernel subspace clustering method that can handle non-linear models. In contrast to recent kernel subspace clustering methods which use predefined kernels, we propose to learn a low-rank kernel matrix, with which mapped data in feature space are not only low-rank but also self-expressive. In this manner, the low-dimensional subspace structures of the (implicitly) mapped data are retained and manifested in the high-dimensional feature space. We evaluate the proposed method extensively on both motion segmentation and image clustering benchmarks, and obtain superior results, outperforming the kernel subspace clustering method that uses standard kernels [Patel 2014] and other state-of-the-art linear subspace clustering methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
77,142
2106.15190
DCASE 2021 Task 3: Spectrotemporally-aligned Features for Polyphonic Sound Event Localization and Detection
Sound event localization and detection consists of two subtasks which are sound event detection and direction-of-arrival estimation. While sound event detection mainly relies on time-frequency patterns to distinguish different sound classes, direction-of-arrival estimation uses magnitude or phase differences between microphones to estimate source directions. Therefore, it is often difficult to jointly train these two subtasks simultaneously. We propose a novel feature called spatial cue-augmented log-spectrogram (SALSA) with exact time-frequency mapping between the signal power and the source direction-of-arrival. The feature includes multichannel log-spectrograms stacked along with the estimated direct-to-reverberant ratio and a normalized version of the principal eigenvector of the spatial covariance matrix at each time-frequency bin on the spectrograms. Experimental results on the DCASE 2021 dataset for sound event localization and detection with directional interference showed that the deep learning-based models trained on this new feature outperformed the DCASE challenge baseline by a large margin. We combined several models with slightly different architectures that were trained on the new feature to further improve the system performances for the DCASE sound event localization and detection challenge.
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
243,643
1905.09744
A consistent and comprehensive computational approach for general Fluid-Structure-Contact Interaction problems
We present a consistent approach that allows us to solve challenging general nonlinear fluid-structure-contact interaction (FSCI) problems. The underlying continuous formulation includes both "no-slip" fluid-structure interaction as well as frictionless contact between multiple elastic bodies. The respective interface conditions in normal and tangential orientation and especially the role of the fluid stress within the region of closed contact are discussed for the general problem of FSCI. To ensure continuity of the tangential constraints from no-slip to frictionless contact, a transition is enabled by using the general Navier condition with varying slip length. Moreover, the fluid stress in the contact zone is obtained by an extension approach as it plays a crucial role for the lift-off behavior of contacting bodies. With the given continuity of the spatially continuous formulation, continuity of the discrete problem (which is essential for the convergence of Newton's method) is reached naturally. As topological changes of the fluid domain are an inherent challenge in FSCI configurations, a non-interface fitted Cut Finite Element Method (CutFEM) is applied to discretize the fluid domain. All interface conditions, that is the "no-slip" FSI, the general Navier condition, and frictionless contact, are incorporated using Nitsche-based methods, thus retaining the continuity and consistency of the model. To account for the strong interaction between the fluid and solid discretization, the overall coupled discrete system is solved monolithically. Numerical examples of varying complexity are presented to corroborate the developments.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
131,819
1702.04111
Efficient Algorithms for Moral Lineage Tracing
Lineage tracing, the joint segmentation and tracking of living cells as they move and divide in a sequence of light microscopy images, is a challenging task. Jug et al. have proposed a mathematical abstraction of this task, the moral lineage tracing problem (MLTP), whose feasible solutions define both a segmentation of every image and a lineage forest of cells. Their branch-and-cut algorithm, however, is prone to many cuts and slow convergence for large instances. To address this problem, we make three contributions: (i) we devise the first efficient primal feasible local search algorithms for the MLTP, (ii) we improve the branch-and-cut algorithm by separating tighter cutting planes and by incorporating our primal algorithms, (iii) we show in experiments that our algorithms find accurate solutions on the problem instances of Jug et al. and scale to larger instances, leveraging moral lineage tracing to practical significance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
68,222
1810.00093
Barrier Certificates for Assured Machine Teaching
Machine teaching can be viewed as optimal control for learning. Given a learner's model, machine teaching aims to determine the optimal training data to steer the learner towards a target hypothesis. In this paper, we are interested in providing assurances for machine teaching algorithms using control theory. In particular, we study a well-established learner's model in the machine teaching literature that is captured by the local preference over a version space. We interpret the problem of teaching a preference-based learner as solving a partially observable Markov decision process (POMDP). We then show that the POMDP formulation can be cast as a special hybrid system, i.e., a discrete-time switched system. Subsequently, we use barrier certificates to verify set-theoretic properties of this special hybrid system. We show how the computation of the barrier certificate can be decomposed and numerically implemented as the solution to a sum-of-squares (SOS) program. For illustration, we show how the proposed framework based on control theory can be used to verify the teaching performance of two well-known machine teaching methods.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
109,087
2210.05594
Navigating Ensemble Configurations for Algorithmic Fairness
Bias mitigators can improve algorithmic fairness in machine learning models, but their effect on fairness is often not stable across data splits. A popular approach to train more stable models is ensemble learning, but unfortunately, it is unclear how to combine ensembles with mitigators to best navigate trade-offs between fairness and predictive performance. To that end, we built an open-source library enabling the modular composition of 8 mitigators, 4 ensembles, and their corresponding hyperparameters, and we empirically explored the space of configurations on 13 datasets. We distilled our insights from this exploration in the form of a guidance diagram for practitioners that we demonstrate is robust and reproducible.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
322,922
2004.02941
Fingerprint Presentation Attack Detection: A Sensor and Material Agnostic Approach
The vulnerability of automated fingerprint recognition systems to presentation attacks (PA), i.e., spoof or altered fingers, has been a growing concern, warranting the development of accurate and efficient presentation attack detection (PAD) methods. However, one major limitation of the existing PAD solutions is their poor generalization to new PA materials and fingerprint sensors, not used in training. In this study, we propose a robust PAD solution with improved cross-material and cross-sensor generalization. Specifically, we build on top of any CNN-based architecture trained for fingerprint spoof detection combined with cross-material spoof generalization using a style transfer network wrapper. We also incorporate adversarial representation learning (ARL) in deep neural networks (DNN) to learn sensor and material invariant representations for PAD. Experimental results on LivDet 2015 and 2017 public domain datasets exhibit the effectiveness of the proposed approach.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
171,383
2012.12842
Point-Based Value Iteration and Approximately Optimal Dynamic Sensor Selection for Linear-Gaussian Processes
The problem of synthesizing an optimal sensor selection policy is pertinent to a variety of engineering applications ranging from event detection to autonomous navigation. We consider such a synthesis problem over an infinite time horizon with a discounted cost criterion. We formulate this problem in terms of a value iteration over the continuous space of covariance matrices. To obtain a computationally tractable solution, we subsequently formulate an approximate sensor selection problem, which is solvable through a point-based value iteration over a finite "mesh" of covariance matrices with a user-defined bounded trace. We provide theoretical guarantees bounding the suboptimality of the sensor selection policies synthesized through this method and provide numerical examples comparing them to known results.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
213,043
2110.03101
SPEED+: Next-Generation Dataset for Spacecraft Pose Estimation across Domain Gap
Autonomous vision-based spaceborne navigation is an enabling technology for future on-orbit servicing and space logistics missions. While computer vision in general has benefited from Machine Learning (ML), training and validating spaceborne ML models are extremely challenging due to the impracticality of acquiring a large-scale labeled dataset of images of the intended target in the space environment. Existing datasets, such as Spacecraft PosE Estimation Dataset (SPEED), have so far mostly relied on synthetic images for both training and validation, which are easy to mass-produce but fail to resemble the visual features and illumination variability inherent to the target spaceborne images. In order to bridge the gap between the current practices and the intended applications in future space missions, this paper introduces SPEED+: the next generation spacecraft pose estimation dataset with specific emphasis on domain gap. In addition to 60,000 synthetic images for training, SPEED+ includes 9,531 hardware-in-the-loop images of a spacecraft mockup model captured from the Testbed for Rendezvous and Optical Navigation (TRON) facility. TRON is a first-of-a-kind robotic testbed capable of capturing an arbitrary number of target images with accurate and maximally diverse pose labels and high-fidelity spaceborne illumination conditions. SPEED+ is used in the second international Satellite Pose Estimation Challenge co-hosted by SLAB and the Advanced Concepts Team of the European Space Agency to evaluate and compare the robustness of spaceborne ML models trained on synthetic images.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
259,364
2311.10792
Enhancing Data Efficiency and Feature Identification for Lithium-Ion Battery Lifespan Prediction by Deciphering Interpretation of Temporal Patterns and Cyclic Variability Using Attention-Based Models
Accurately predicting the lifespan of lithium-ion batteries is crucial for optimizing operational strategies and mitigating risks. While numerous studies have aimed at predicting battery lifespan, few have examined the interpretability of their models or how such insights could improve predictions. Addressing this gap, we introduce three innovative models that integrate shallow attention layers into a foundational model from our previous work, which combined elements of recurrent and convolutional neural networks. Utilizing a well-known public dataset, we showcase our methodology's effectiveness. Temporal attention is applied to identify critical timesteps and highlight differences among test cell batches, particularly underscoring the significance of the "rest" phase. Furthermore, by applying cyclic attention via self-attention to context vectors, our approach effectively identifies key cycles, enabling us to strategically decrease the input size for quicker predictions. Employing both single- and multi-head attention mechanisms, we have systematically minimized the required input from 100 to 50 and then to 30 cycles, refining this process based on cyclic attention scores. Our refined model exhibits strong regression capabilities, accurately forecasting the initiation of rapid capacity fade with an average deviation of only 58 cycles by analyzing just the initial 30 cycles of easily accessible input data.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
408,662
2204.01732
A high-order tensor completion algorithm based on Fully-Connected Tensor Network weighted optimization
Tensor completion aims at recovering missing data, and it is one of the popular concerns in deep learning and signal processing. Among the higher-order tensor decomposition algorithms, the recently proposed fully-connected tensor network decomposition (FCTN) algorithm is the most advanced. In this paper, by leveraging the superior expression of the fully-connected tensor network (FCTN) decomposition, we propose a new tensor completion method named the fully-connected tensor network weighted optimization (FCTN-WOPT). The algorithm performs a composition of the completed tensor by initialising the factors from the FCTN decomposition. We build a loss function with the weight tensor, the completed tensor and the incomplete tensor together, and then update the completed tensor using the L-BFGS gradient descent algorithm to reduce the spatial memory occupation and speed up iterations. Finally we test the completion with synthetic data and real data (both image data and video data) and the results show the advanced performance of our FCTN-WOPT when it is applied to higher-order tensor completion.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
289,715
2112.05498
Sparse Depth Completion with Semantic Mesh Deformation Optimization
Sparse depth measurements are widely available in many applications such as augmented reality, visual inertial odometry and robots equipped with low cost depth sensors. Although such sparse depth samples work well for certain applications like motion tracking, a complete depth map is usually preferred for broader applications, such as 3D object recognition, 3D reconstruction and autonomous driving. Despite the recent advancements in depth prediction from single RGB images with deeper neural networks, the existing approaches do not yield reliable results for practical use. In this work, we propose a neural network with post-optimization, which takes an RGB image and sparse depth samples as input and predicts the complete depth map. We make three major contributions to advance the state-of-the-art: an improved backbone network architecture named EDNet, a semantic edge-weighted loss function and a semantic mesh deformation optimization method. Our evaluation results outperform the existing work consistently on both indoor and outdoor datasets, and it significantly reduces the mean average error by up to 19.5% under the same settings of 200 sparse samples on NYU-Depth-V2 dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
270,863
2407.09121
Refuse Whenever You Feel Unsafe: Improving Safety in LLMs via Decoupled Refusal Training
This study addresses a critical gap in safety tuning practices for Large Language Models (LLMs) by identifying and tackling a refusal position bias within safety tuning data, which compromises the models' ability to appropriately refuse generating unsafe content. We introduce a novel approach, Decoupled Refusal Training (DeRTa), designed to empower LLMs to refuse compliance to harmful prompts at any response position, significantly enhancing their safety capabilities. DeRTa incorporates two novel components: (1) Maximum Likelihood Estimation (MLE) with Harmful Response Prefix, which trains models to recognize and avoid unsafe content by appending a segment of harmful response to the beginning of a safe response, and (2) Reinforced Transition Optimization (RTO), which equips models with the ability to transition from potential harm to safety refusal consistently throughout the harmful response sequence. Our empirical evaluation, conducted using LLaMA3 and Mistral model families across six attack scenarios, demonstrates that our method not only improves model safety without compromising performance but also surpasses well-known models such as GPT-4 in defending against attacks. Importantly, our approach successfully defends recent advanced attack methods (e.g., CodeAttack) that have jailbroken GPT-4 and LLaMA3-70B-Instruct. Our code and data can be found at https://github.com/RobustNLP/DeRTa.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
472,462
2005.00789
Is Multihop QA in DiRe Condition? Measuring and Reducing Disconnected Reasoning
Has there been real progress in multi-hop question-answering? Models often exploit dataset artifacts to produce correct answers, without connecting information across multiple supporting facts. This limits our ability to measure true progress and defeats the purpose of building multi-hop QA datasets. We make three contributions towards addressing this. First, we formalize such undesirable behavior as disconnected reasoning across subsets of supporting facts. This allows developing a model-agnostic probe for measuring how much any model can cheat via disconnected reasoning. Second, using a notion of \emph{contrastive support sufficiency}, we introduce an automatic transformation of existing datasets that reduces the amount of disconnected reasoning. Third, our experiments suggest that there hasn't been much progress in multi-hop QA in the reading comprehension setting. For a recent large-scale model (XLNet), we show that only 18 points out of its answer F1 score of 72 on HotpotQA are obtained through multifact reasoning, roughly the same as that of a simpler RNN baseline. Our transformation substantially reduces disconnected reasoning (19 points in answer F1). It is complementary to adversarial approaches, yielding further reductions in conjunction.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
175,375
2312.12037
Founder-GPT: Self-play to evaluate the Founder-Idea fit
This research introduces an innovative evaluation method for the "founder-idea" fit in early-stage startups, utilizing advanced large language model techniques to assess founders' profiles against their startup ideas to enhance decision-making. Embeddings, self-play, tree-of-thought, and critique-based refinement techniques show early promising results that each idea's success patterns are unique and they should be evaluated based on the context of the founder's background.
false
true
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
416,823
2307.02616
Federated Epidemic Surveillance
Epidemic surveillance is a challenging task, especially when crucial data is fragmented across institutions and data custodians are unable or unwilling to share it. This study aims to explore the feasibility of a simple federated surveillance approach. The idea is to conduct hypothesis tests for a rise in counts behind each custodian's firewall and then combine p-values from these tests using techniques from meta-analysis. We propose a hypothesis testing framework to identify surges in epidemic-related data streams and conduct experiments on real and semi-synthetic data to assess the power of different p-value combination methods to detect surges without needing to combine the underlying counts. Our findings show that relatively simple combination methods achieve a high degree of fidelity and suggest that infectious disease outbreaks can be detected without needing to share even aggregate data across institutions.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
377,745
1911.09326
LCD: Learned Cross-Domain Descriptors for 2D-3D Matching
In this work, we present a novel method to learn a local cross-domain descriptor for 2D image and 3D point cloud matching. Our proposed method is a dual auto-encoder neural network that maps 2D and 3D input into a shared latent space representation. We show that such local cross-domain descriptors in the shared embedding are more discriminative than those obtained from individual training in 2D and 3D domains. To facilitate the training process, we built a new dataset by collecting $\approx 1.4$ millions of 2D-3D correspondences with various lighting conditions and settings from publicly available RGB-D scenes. Our descriptor is evaluated in three main experiments: 2D-3D matching, cross-domain retrieval, and sparse-to-dense depth estimation. Experimental results confirm the robustness of our approach as well as its competitive performance not only in solving cross-domain tasks but also in being able to generalize to solve sole 2D and 3D tasks. Our dataset and code are released publicly at \url{https://hkust-vgd.github.io/lcd}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
154,485
2403.09863
Towards White Box Deep Learning
Deep neural networks learn fragile "shortcut" features, rendering them difficult to interpret (black box) and vulnerable to adversarial attacks. This paper proposes semantic features as a general architectural solution to this problem. The main idea is to make features locality-sensitive in the adequate semantic topology of the domain, thus introducing a strong regularization. The proof of concept network is lightweight, inherently interpretable and achieves almost human-level adversarial test metrics - with no adversarial training! These results and the general nature of the approach warrant further research on semantic features. The code is available at https://github.com/314-Foundation/white-box-nn
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
437,939
2408.10798
Universal Novelty Detection Through Adaptive Contrastive Learning
Novelty detection is a critical task for deploying machine learning models in the open world. A crucial property of novelty detection methods is universality, which can be interpreted as generalization across various distributions of training or test data. More precisely, for novelty detection, distribution shifts may occur in the training set or the test set. Shifts in the training set refer to cases where we train a novelty detector on a new dataset and expect strong transferability. Conversely, distribution shifts in the test set indicate the methods' performance when the trained model encounters a shifted test sample. We experimentally show that existing methods falter in maintaining universality, which stems from their rigid inductive biases. Motivated by this, we aim for more generalized techniques that have more adaptable inductive biases. In this context, we leverage the fact that contrastive learning provides an efficient framework to easily switch and adapt to new inductive biases through the proper choice of augmentations in forming the negative pairs. We propose a novel probabilistic auto-negative pair generation method, AutoAugOOD, along with contrastive learning, to yield a universal novelty detection method. Our experiments demonstrate the superiority of our method under different distribution shifts in various image benchmark datasets. Notably, our method exhibits universality through the lens of adaptability to different setups of novelty detection, including one-class, unlabeled multi-class, and labeled multi-class settings. Code: https://github.com/mojtaba-nafez/UNODE
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
482,028
cs/0701072
Tagging, Folksonomy & Co - Renaissance of Manual Indexing?
This paper gives an overview of current trends in manual indexing on the Web. Along with a general rise of user generated content there are more and more tagging systems that allow users to annotate digital resources with tags (keywords) and share their annotations with other users. Tagging is frequently seen in contrast to traditional knowledge organization systems or as something completely new. This paper shows that tagging should better be seen as a popular form of manual indexing on the Web. Difference between controlled and free indexing blurs with sufficient feedback mechanisms. A revised typology of tagging systems is presented that includes different user roles and knowledge organization systems with hierarchical relationships and vocabulary control. A detailed bibliography of current research in collaborative tagging is included.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
540,047
2208.14503
Cost-Efficient Deployment of a Reliable Multi-UAV Unmanned Aerial System
In this work, we study the trade-off between the reliability and the investment cost of an unmanned aerial system (UAS) consisting of a set of unmanned aerial vehicles (UAVs) carrying radio access nodes, called portable access points (PAPs), deployed to serve a set of ground nodes (GNs). Using the proposed algorithm, a given geographical region is equivalently represented as a set of circular regions, where each circle represents the coverage region of a PAP. Then, the steady-state availability of the UAS is analytically derived by modelling it as a continuous-time birth-death Markov decision process (MDP). Numerical evaluations show that the investment cost to guarantee a given steady-state availability to a set of GNs can be reduced by considering the traffic demand and distribution of GNs.
false
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
true
315,336
2302.12288
ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth
This paper tackles the problem of depth estimation from a single image. Existing work either focuses on generalization performance disregarding metric scale, i.e. relative depth estimation, or state-of-the-art results on specific datasets, i.e. metric depth estimation. We propose the first approach that combines both worlds, leading to a model with excellent generalization performance while maintaining metric scale. Our flagship model, ZoeD-M12-NK, is pre-trained on 12 datasets using relative depth and fine-tuned on two datasets using metric depth. We use a lightweight head with a novel bin adjustment design called metric bins module for each domain. During inference, each input image is automatically routed to the appropriate head using a latent classifier. Our framework admits multiple configurations depending on the datasets used for relative depth pre-training and metric fine-tuning. Without pre-training, we can already significantly improve the state of the art (SOTA) on the NYU Depth v2 indoor dataset. Pre-training on twelve datasets and fine-tuning on the NYU Depth v2 indoor dataset, we can further improve SOTA for a total of 21% in terms of relative absolute error (REL). Finally, ZoeD-M12-NK is the first model that can jointly train on multiple datasets (NYU Depth v2 and KITTI) without a significant drop in performance and achieve unprecedented zero-shot generalization performance to eight unseen datasets from both indoor and outdoor domains. The code and pre-trained models are publicly available at https://github.com/isl-org/ZoeDepth .
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
347,504
1412.6572
Explaining and Harnessing Adversarial Examples
Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
38,659
2409.17189
Decentralized Federated Learning with Gradient Tracking over Time-Varying Directed Networks
We investigate the problem of agent-to-agent interaction in decentralized (federated) learning over time-varying directed graphs, and, in doing so, propose a consensus-based algorithm called DSGTm-TV. The proposed algorithm incorporates gradient tracking and heavy-ball momentum to distributively optimize a global objective function, while preserving local data privacy. Under DSGTm-TV, agents will update local model parameters and gradient estimates using information exchange with neighboring agents enabled through row- and column-stochastic mixing matrices, which we show guarantee both consensus and optimality. Our analysis establishes that DSGTm-TV exhibits linear convergence to the exact global optimum when exact gradient information is available, and converges in expectation to a neighborhood of the global optimum when employing stochastic gradients. Moreover, in contrast to existing methods, DSGTm-TV preserves convergence for networks with uncoordinated stepsizes and momentum parameters, for which we provide explicit bounds. These results enable agents to operate in a fully decentralized manner, independently optimizing their local hyper-parameters. We demonstrate the efficacy of our approach via comparisons with state-of-the-art baselines on real-world image classification and natural language processing tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
491,697
2204.04867
Structured Graph Variational Autoencoders for Indoor Furniture layout Generation
We present a structured graph variational autoencoder for generating the layout of indoor 3D scenes. Given the room type (e.g., living room or library) and the room layout (e.g., room elements such as floor and walls), our architecture generates a collection of objects (e.g., furniture items such as sofa, table and chairs) that is consistent with the room type and layout. This is a challenging problem because the generated scene should satisfy multiple constrains, e.g., each object must lie inside the room and two objects cannot occupy the same volume. To address these challenges, we propose a deep generative model that encodes these relationships as soft constraints on an attributed graph (e.g., the nodes capture attributes of room and furniture elements, such as class, pose and size, and the edges capture geometric relationships such as relative orientation). The architecture consists of a graph encoder that maps the input graph to a structured latent space, and a graph decoder that generates a furniture graph, given a latent code and the room graph. The latent space is modeled with auto-regressive priors, which facilitates the generation of highly structured scenes. We also propose an efficient training procedure that combines matching and constrained learning. Experiments on the 3D-FRONT dataset show that our method produces scenes that are diverse and are adapted to the room layout.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
290,819
2401.16011
GPS: Graph Contrastive Learning via Multi-scale Augmented Views from Adversarial Pooling
Self-supervised graph representation learning has recently shown considerable promise in a range of fields, including bioinformatics and social networks. A large number of graph contrastive learning approaches have shown promising performance for representation learning on graphs, which train models by maximizing agreement between original graphs and their augmented views (i.e., positive views). Unfortunately, these methods usually involve pre-defined augmentation strategies based on the knowledge of human experts. Moreover, these strategies may fail to generate challenging positive views to provide sufficient supervision signals. In this paper, we present a novel approach named Graph Pooling ContraSt (GPS) to address these issues. Motivated by the fact that graph pooling can adaptively coarsen the graph with the removal of redundancy, we rethink graph pooling and leverage it to automatically generate multi-scale positive views with varying emphasis on providing challenging positives and preserving semantics, i.e., strongly-augmented view and weakly-augmented view. Then, we incorporate both views into a joint contrastive learning framework with similarity learning and consistency learning, where our pooling module is adversarially trained with respect to the encoder for adversarial robustness. Experiments on twelve datasets on both graph classification and transfer learning tasks verify the superiority of the proposed method over its counterparts.
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
false
false
424,680
2410.10542
Rethinking Legal Judgement Prediction in a Realistic Scenario in the Era of Large Language Models
This study investigates judgment prediction in a realistic scenario within the context of Indian judgments, utilizing a range of transformer-based models, including InLegalBERT, BERT, and XLNet, alongside LLMs such as Llama-2 and GPT-3.5 Turbo. In this realistic scenario, we simulate how judgments are predicted at the point when a case is presented for a decision in court, using only the information available at that time, such as the facts of the case, statutes, precedents, and arguments. This approach mimics real-world conditions, where decisions must be made without the benefit of hindsight, unlike retrospective analyses often found in previous studies. For transformer models, we experiment with hierarchical transformers and the summarization of judgment facts to optimize input for these models. Our experiments with LLMs reveal that GPT-3.5 Turbo excels in realistic scenarios, demonstrating robust performance in judgment prediction. Furthermore, incorporating additional legal information, such as statutes and precedents, significantly improves the outcome of the prediction task. The LLMs also provide explanations for their predictions. To evaluate the quality of these predictions and explanations, we introduce two human evaluation metrics: Clarity and Linking. Our findings from both automatic and human evaluations indicate that, despite advancements in LLMs, they are yet to achieve expert-level performance in judgment prediction and explanation tasks.
false
false
false
false
true
true
true
false
true
false
false
false
false
false
false
false
false
false
498,125
1409.2329
Recurrent Neural Network Regularization
We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
35,898