Dataset schema (column: type, observed range):
- id: string, length 9-16
- title: string, length 4-278
- abstract: string, length 3-4.08k
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each (one-hot category flags)
- __index_level_0__: int64, 0-541k

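Given the columns above, the one-hot category flags of a row can be decoded into a label list. A minimal sketch in plain Python (the helper name is illustrative; rows are assumed to be dicts keyed by the column names):

```python
# Boolean label columns, in the order they appear in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_labels(row: dict) -> list:
    """Return the arXiv categories whose flag is True for this row."""
    return [c for c in LABEL_COLUMNS if row.get(c)]

# First record of the preview: only the cs.IT flag is set.
record = {"id": "2205.10620", "cs.IT": True}
print(active_labels(record))  # ['cs.IT']
```

Rows with several flags set (multi-label papers) come back as a list in schema order, which keeps downstream label encoding deterministic.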
2205.10620
GNN-Enhanced Approximate Message Passing for Massive/Ultra-Massive MIMO Detection
Efficient massive/ultra-massive multiple-input multiple-output (MIMO) detection algorithms with satisfactory performance and low complexity are critical to meet the high throughput and ultra-low latency requirements in 5G and beyond communications, given the extremely large number of antennas. In this paper, we propose a low-complexity graph neural network (GNN) enhanced approximate message passing (AMP) algorithm, AMP-GNN, for massive/ultra-massive MIMO detection. The structure of the neural network is customized by unfolding the AMP algorithm and introducing the GNN module for multiuser interference cancellation. Numerical results show that the proposed AMP-GNN significantly improves the performance of the AMP detector and achieves performance comparable to state-of-the-art deep learning-based MIMO detectors, but with reduced computational complexity. Furthermore, it exhibits strong robustness to changes in the number of users.
labels: cs.IT; __index_level_0__: 297,783

2308.08235
The Expressive Power of Graph Neural Networks: A Survey
Graph neural networks (GNNs) are effective machine learning models for many graph-related applications. Despite their empirical success, many research efforts focus on the theoretical limitations of GNNs, i.e., GNNs' expressive power. Early works in this domain mainly study the graph isomorphism recognition ability of GNNs, while recent works leverage properties such as subgraph counting and connectivity learning to characterize the expressive power of GNNs, which are more practical and closer to real-world settings. However, no survey paper or open-source repository comprehensively summarizes and discusses models in this important direction. To fill the gap, we conduct a first survey of models for enhancing expressive power under different forms of definition. Concretely, the models are reviewed based on three categories, i.e., graph feature enhancement, graph topology enhancement, and GNN architecture enhancement.
labels: cs.SI, cs.LG; __index_level_0__: 385,827

2301.11120
Bayesian Detection of Mesoscale Structures in Pathway Data on Graphs
Mesoscale structures are an integral part of the abstraction and analysis of complex systems. They reveal a node's function in the network, and facilitate our understanding of the network dynamics. For example, they can represent communities in social or citation networks, roles in corporate interactions, or core-periphery structures in transportation networks. We usually detect mesoscale structures under the assumption of independence of interactions. Still, in many cases, the interactions invalidate this assumption by occurring in a specific order. Such patterns emerge in pathway data; to capture them, we have to model the dependencies between interactions using higher-order network models. However, the detection of mesoscale structures in higher-order networks is still under-researched. In this work, we derive a Bayesian approach that simultaneously models the optimal partitioning of nodes in groups and the optimal higher-order network dynamics between the groups. In synthetic data we demonstrate that our method can recover both standard proximity-based communities and role-based groupings of nodes. In synthetic and real world data we show that it can compete with baseline techniques, while additionally providing interpretable abstractions of network dynamics.
labels: cs.LG; __index_level_0__: 342,025

1812.03595
PoseFix: Model-agnostic General Human Pose Refinement Network
Multi-person pose estimation from a 2D image is an essential technique for human behavior understanding. In this paper, we propose a human pose refinement network that estimates a refined pose from a tuple of an input image and input pose. In previous methods, pose refinement was performed mainly through an end-to-end trainable multi-stage architecture. However, these are highly dependent on pose estimation models and require careful model design. By contrast, we propose a model-agnostic pose refinement method. According to a recent study, state-of-the-art 2D human pose estimation methods have similar error distributions. We use these error statistics as prior information to generate synthetic poses and use the synthesized poses to train our model. In the testing stage, pose estimation results of any other method can be input to the proposed method. Moreover, the proposed model does not require code or knowledge about other methods, which allows it to be easily used in the post-processing step. We show that the proposed approach achieves better performance than conventional multi-stage refinement models and consistently improves the performance of various state-of-the-art pose estimation methods on the commonly used benchmark. The code is available at https://github.com/mks0601/PoseFix_RELEASE.
labels: cs.CV; __index_level_0__: 116,056

2412.04117
MVUDA: Unsupervised Domain Adaptation for Multi-view Pedestrian Detection
We address multi-view pedestrian detection in a setting where labeled data is collected using a multi-camera setup different from the one used for testing. While recent multi-view pedestrian detectors perform well on the camera rig used for training, their performance declines when applied to a different setup. To facilitate seamless deployment across varied camera rigs, we propose an unsupervised domain adaptation (UDA) method that adapts the model to new rigs without requiring additional labeled data. Specifically, we leverage the mean teacher self-training framework with a novel pseudo-labeling technique tailored to multi-view pedestrian detection. This method achieves state-of-the-art performance on multiple benchmarks, including MultiviewX$\rightarrow$Wildtrack. Unlike previous methods, our approach eliminates the need for external labeled monocular datasets, thereby reducing reliance on labeled data. Extensive evaluations demonstrate the effectiveness of our method and validate key design choices. By enabling robust adaptation across camera setups, our work enhances the practicality of multi-view pedestrian detectors and establishes a strong UDA baseline for future research.
labels: cs.CV; __index_level_0__: 514,267

2109.11018
Making Human-Like Trade-offs in Constrained Environments by Learning from Demonstrations
Many real-life scenarios require humans to make difficult trade-offs: do we always follow all the traffic rules or do we violate the speed limit in an emergency? These scenarios force us to evaluate the trade-off between collective norms and our own personal objectives. To create effective AI-human teams, we must equip AI agents with a model of how humans make trade-offs in complex, constrained environments. These agents will be able to mirror human behavior or to draw human attention to situations where decision making could be improved. To this end, we propose a novel inverse reinforcement learning (IRL) method for learning implicit hard and soft constraints from demonstrations, enabling agents to quickly adapt to new settings. In addition, learning soft constraints over states, actions, and state features allows agents to transfer this knowledge to new domains that share similar aspects. We then use the constraint learning method to implement a novel system architecture that leverages a cognitive model of human decision making, multi-alternative decision field theory (MDFT), to orchestrate competing objectives. We evaluate the resulting agent on trajectory length, number of violated constraints, and total reward, demonstrating that our agent architecture is both general and achieves strong performance. Thus we are able to capture and replicate human-like trade-offs from demonstrations in environments when constraints are not explicit.
labels: cs.AI, cs.LG, cs.RO; __index_level_0__: 256,813

2207.13976
Federated Learning for IoUT: Concepts, Applications, Challenges and Opportunities
The Internet of Underwater Things (IoUT) has gained rapid momentum over the past decade, with applications spanning environmental monitoring, exploration, and defence. Traditional IoUT systems use machine learning (ML) approaches that cater to the needs of reliability, efficiency, and timeliness. However, an extensive review of the various studies conducted highlights the significance of data privacy and security in IoUT frameworks as a predominant factor in achieving desired outcomes in mission-critical applications. Federated learning (FL), a recent development in machine learning, is a secure, decentralized framework that can help address the challenges faced by conventional ML approaches in IoUT. This paper presents an overview of the various applications of FL in IoUT, discusses its challenges and open issues, and indicates directions for future research.
labels: cs.LG; __index_level_0__: 310,456

1910.09405
Hyperspectral Image Classification Based on Adaptive Sparse Deep Network
Sparse models are widely used in hyperspectral image (HSI) classification. However, different choices of sparsity and regularization parameters have a great influence on the classification results. In this paper, a novel adaptive sparse deep network based on a deep architecture is proposed, which can construct the optimal sparse representation and regularization parameters through a deep network. First, a data-flow graph is designed to represent each update iteration of the Alternating Direction Method of Multipliers (ADMM) algorithm; the forward network and back-propagation network are deduced, and all parameters are updated by gradient descent during back-propagation. We then propose an adaptive sparse deep network. Compared with several traditional classifiers and other sparse-model algorithms, experimental results indicate that our method achieves a great improvement in HSI classification.
labels: cs.CV; __index_level_0__: 150,184

2405.04781
CourseGPT-zh: an Educational Large Language Model Based on Knowledge Distillation Incorporating Prompt Optimization
Large language models (LLMs) have demonstrated astonishing capabilities in natural language processing (NLP) tasks, sparking interest in their application to professional domains with higher specialized requirements. However, restricted access to closed-source LLMs via APIs and the difficulty of collecting massive high-quality datasets pose obstacles to the development of large language models in the education field across various courses. Given these challenges, we propose CourseGPT-zh, a course-oriented education LLM that supports customization and low-cost deployment. To address the comprehensiveness and diversity requirements of course-specific corpora, we design a high-quality question-answering corpus distillation framework incorporating prompt optimization, which effectively mines textbook knowledge and enhances its diversity. Moreover, considering the alignment of LLM responses with user needs, a novel method for discrete prompt optimization based on LLM-as-Judge is introduced. During optimization, this framework leverages the LLM's ability to reflect on and exploit error feedback and patterns, allowing for prompts that meet user needs and preferences while keeping responses concise. Lastly, we obtain CourseGPT-zh based on the open-source LLM using parameter-efficient fine-tuning. Experimental results show that our discrete prompt optimization framework effectively improves the response quality of ChatGPT, and CourseGPT-zh exhibits strong professional capabilities in specialized knowledge question-answering, significantly outperforming comparable open-source models.
labels: cs.CL; __index_level_0__: 452,678

1703.08030
Indoor Office Wideband Penetration Loss Measurements at 73 GHz
This paper presents millimeter wave (mmWave) penetration loss measurements and analysis at 73 GHz using a wideband sliding correlator channel sounder in an indoor office environment. Penetration loss was measured using a carefully controlled measurement setup for many common indoor building materials such as glass doors, glass windows, closet doors, steel doors, and whiteboard writing walls. Measurements were conducted using narrowbeam transmitter (TX) and receiver (RX) horn antennas that were boresight-aligned with a test material between the antennas. Overall, 21 different locations were measured for 6 different materials such that the same type of material was tested in at least two locations in order to characterize the effect of penetration loss for materials with similar composition. As shown here, attenuation through common materials ranged between 0.8 dB/cm and 9.9 dB/cm for co-polarized antennas, while cross-polarized antennas exhibited similar attenuation for most materials, but up to 23.4 dB/cm of attenuation for others. The penetration loss results presented here are useful for site-specific planning tools that will model indoor mmWave networks, without the need for expensive measurement campaigns.
labels: cs.IT; __index_level_0__: 70,506

2402.08749
Automated detection of motion artifacts in brain MR images using deep learning and explainable artificial intelligence
Quality assessment, including inspecting the images for artifacts, is a critical step during MRI data acquisition to ensure data quality and downstream analysis or interpretation success. This study demonstrates a deep learning model to detect rigid motion in T1-weighted brain images. We leveraged a 2D CNN for three-class classification and tested it on publicly available retrospective and prospective datasets. Grad-CAM heatmaps enabled the identification of failure modes and provided an interpretation of the model's results. The model achieved average precision and recall metrics of 85% and 80% on six motion-simulated retrospective datasets. Additionally, the model's classifications on the prospective dataset showed a strong inverse correlation (-0.84) compared to average edge strength, an image quality metric indicative of motion. This model is part of the ArtifactID tool, aimed at inline automatic detection of Gibbs ringing, wrap-around, and motion artifacts. This tool automates part of the time-consuming QA process and augments expertise on-site, particularly relevant in low-resource settings where local MR knowledge is scarce.
labels: cs.LG, cs.CV; __index_level_0__: 429,215

2302.08145
Analysis of d-ary Tree Algorithms with Successive Interference Cancellation
In this article, we calculate the mean throughput, number of collisions, successes, and idle slots for random tree algorithms with successive interference cancellation. Except for the case of the throughput for the binary tree, all the results are new. We furthermore disprove the claim that only the binary tree maximises throughput. Our method works with many observables and can be used as a blueprint for further analysis.
labels: cs.IT, Other; __index_level_0__: 345,958

2006.09073
Mucko: Multi-Layer Cross-Modal Knowledge Reasoning for Fact-based Visual Question Answering
Fact-based Visual Question Answering (FVQA) requires external knowledge beyond visible content to answer questions about an image, which is challenging but indispensable to achieve general VQA. One limitation of existing FVQA solutions is that they jointly embed all kinds of information without fine-grained selection, which introduces unexpected noises for reasoning the final answer. How to capture the question-oriented and information-complementary evidence remains a key challenge to solve the problem. In this paper, we depict an image by a multi-modal heterogeneous graph, which contains multiple layers of information corresponding to the visual, semantic and factual features. On top of the multi-layer graph representations, we propose a modality-aware heterogeneous graph convolutional network to capture evidence from different layers that is most relevant to the given question. Specifically, the intra-modal graph convolution selects evidence from each modality and cross-modal graph convolution aggregates relevant information across different modalities. By stacking this process multiple times, our model performs iterative reasoning and predicts the optimal answer by analyzing all question-oriented evidence. We achieve a new state-of-the-art performance on the FVQA task and demonstrate the effectiveness and interpretability of our model with extensive experiments.
labels: cs.AI, cs.LG, cs.CL, cs.CV; __index_level_0__: 182,413

1704.03273
Simultaneous Stereo Video Deblurring and Scene Flow Estimation
Videos of outdoor scenes often show unpleasant blur due to the large relative motion between the camera and dynamic objects and to large depth variations. Existing works typically focus on monocular video deblurring. In this paper, we propose a novel approach to deblurring stereo videos. In particular, we exploit the piece-wise planar assumption about the scene and leverage scene flow information to deblur the image. Unlike the existing approach [31], which used a pre-computed scene flow, we propose a single framework that jointly estimates the scene flow and deblurs the image, where the motion cues from scene flow estimation and the blur information reinforce each other, producing results superior to conventional scene flow estimation or stereo deblurring methods. We evaluate our method extensively on two available datasets and achieve significant improvements in flow estimation and blur removal over state-of-the-art methods.
labels: cs.CV; __index_level_0__: 71,599

2106.02118
A Prospective Observational Study to Investigate Performance of a Chest X-ray Artificial Intelligence Diagnostic Support Tool Across 12 U.S. Hospitals
Importance: An artificial intelligence (AI)-based model to predict COVID-19 likelihood from chest x-ray (CXR) findings can serve as an important adjunct to accelerate and improve immediate clinical decision making. Despite significant efforts, many limitations and biases exist in previously developed AI diagnostic models for COVID-19. Utilizing a large set of local and international CXR images, we developed an AI model with high performance on temporal and external validation. Conclusions and Relevance: AI-based diagnostic tools may serve as an adjunct, but not a replacement, for clinical decision support of COVID-19 diagnosis, which largely hinges on exposure history, signs, and symptoms. While AI-based tools have not yet reached full diagnostic potential in COVID-19, they may still offer valuable information to clinicians when taken into consideration along with clinical signs and symptoms.
labels: cs.LG, cs.CV; __index_level_0__: 238,732

2306.02557
Detecting individual-level infections using sparse group-testing through graph-coupled hidden Markov models
Identifying the infection status of each individual during infectious disease outbreaks informs public health management. However, performing frequent individual-level tests may not be feasible. Instead, sparse and sometimes group-level tests are performed. Determining the infection status of individuals from sparse group-level tests remains an open problem. We have tackled this problem by extending graph-coupled hidden Markov models, with individuals' infection statuses as the hidden states and the group test results as the observations. We fitted the model to simulation datasets using Gibbs sampling. The model achieved an AUC of about 0.55 for low testing frequencies, increasing to 0.80 when the groups were tested every day. The model was also tested on the daily-testing case to predict statuses over time, starting 15 days after the beginning of the spread, which resulted in an AUC of 0.98 at day 16 that remained above 0.80 until day 128. Therefore, although dealing with sparse tests remains unsolved, the results open the possibility of using initial group screenings during pandemics to accurately estimate individuals' infection statuses.
labels: cs.AI; __index_level_0__: 370,947

2102.08014
Representing Hierarchical Structure by Using Cone Embedding
Graph embedding is becoming an important method with applications in various areas, including social networks and knowledge graph completion. In particular, Poincar\'e embedding has been proposed to capture the hierarchical structure of graphs, and its effectiveness has been reported. However, most of the existing methods have isometric mappings in the embedding space, and the choice of the origin point can be arbitrary. This is not desirable when the distance from the origin is used as an indicator of hierarchy, as in the case of Poincar\'e embedding. In this paper, we propose cone embedding, an embedding method in a metric cone, which solves these problems and brings further benefits: 1) we provide an indicator of hierarchical information that is both geometrically and intuitively natural to interpret, and 2) we can extract the hierarchical structure from the graph embedding output of other methods by learning additional one-dimensional parameters.
labels: cs.AI, cs.LG; __index_level_0__: 220,315

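The Poincar\'e-embedding property this abstract builds on, the distance from the origin acting as a hierarchy indicator, has a simple closed form in the Poincar\'e ball. A minimal generic sketch (not the paper's cone-embedding method; the function name is illustrative):

```python
import math

def poincare_depth(x):
    """Hyperbolic distance from the origin in the Poincare ball:
    d(0, x) = 2 * artanh(||x||).  A larger distance suggests a deeper
    (more specific) node in the hierarchy."""
    norm = math.sqrt(sum(v * v for v in x))
    assert norm < 1.0, "points must lie strictly inside the unit ball"
    return 2.0 * math.atanh(norm)

# A point near the ball's boundary is "deeper" than one near the origin.
print(poincare_depth([0.05, 0.0]) < poincare_depth([0.9, 0.0]))  # True
```

The cone-embedding paper's point is precisely that this indicator depends on where the origin sits; the sketch only illustrates the baseline behavior being criticized.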
2312.05557
Long-Term Rate-Fairness-Aware Beamforming Based Massive MIMO Systems
This is the first treatise on multi-user (MU) beamforming designed for achieving long-term rate-fairness in full-dimensional MU massive multi-input multi-output (m-MIMO) systems. Explicitly, based on the channel covariances, which can be assumed to be known beforehand, we address this problem by optimizing the following objective functions: the users' signal-to-leakage-noise ratios (SLNRs) using SLNR max-min optimization, geometric mean of SLNRs (GM-SLNR) based optimization, and SLNR soft max-min optimization. We develop a convex-solver based algorithm, which invokes a convex subproblem of cubic time-complexity at each iteration, for solving the SLNR max-min problem. We then develop closed-form expression based algorithms of scalable complexity for the solution of the GM-SLNR and of the SLNR soft max-min problem. The simulations provided confirm the users' improved-fairness ergodic rate distributions.
labels: cs.IT; __index_level_0__: 414,137

2310.04836
Dual Grained Quantization: Efficient Fine-Grained Quantization for LLM
Large Language Models (LLMs) pose significant hardware challenges related to memory requirements and computational ability. There are two mainstream quantization schemes for LLMs: coarse-grained ($\textit{e.g.,}$ channel-wise) quantization and fine-grained ($\textit{e.g.,}$ group-wise) quantization. Fine-grained quantization has smaller quantization loss, consequently achieving superior performance. However, when applied to weight-activation quantization, it disrupts continuous integer matrix multiplication, leading to inefficient inference. In this paper, we introduce Dual Grained Quantization (DGQ), a novel A8W4 quantization for LLMs that maintains superior performance while ensuring fast inference speed. DGQ dequantizes the fine-grained INT4 weights into a coarse-grained INT8 representation and performs matrix multiplication using INT8 kernels. Besides, we develop a two-phase grid search algorithm to simplify the determination of fine-grained and coarse-grained quantization scales. We also devise a percentile clipping schema for smoothing the activation outliers without the need for complex optimization techniques. Experimental results demonstrate that DGQ consistently outperforms prior methods across various LLM architectures and a wide range of tasks. Remarkably, with our implemented efficient CUTLASS kernel, we achieve $\textbf{1.12}$ $\times$ memory reduction and $\textbf{3.24}$ $\times$ speed gains compared to the A16W4 implementation. These advancements enable efficient deployment of A8W4 LLMs for real-world applications.
labels: cs.AI; __index_level_0__: 397,845

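The coarse- versus fine-grained distinction in the abstract above can be made concrete with a minimal symmetric group-wise quantizer. This is a generic sketch under illustrative choices (function name, group size, rounding scheme), not the paper's DGQ kernels:

```python
def quantize_groupwise(weights, group_size=4, n_bits=4):
    """Symmetric per-group quantization: each group of weights shares one scale.
    Returns a list of (quantized_ints, scale) pairs."""
    qmax = 2 ** (n_bits - 1) - 1  # 7 for INT4
    out = []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        scale = max(abs(w) for w in group) / qmax or 1.0  # avoid a zero scale
        out.append(([round(w / scale) for w in group], scale))
    return out

# Each group gets its own scale, so a single outlier (3.0 here) only degrades
# the resolution of its own group rather than the whole tensor; that is the
# accuracy advantage of fine-grained quantization the abstract refers to.
groups = quantize_groupwise([0.1, -0.2, 0.05, 0.4, 3.0, -1.5, 0.2, 0.7])
```

Dequantization is `q * scale` per group; DGQ's contribution is re-expressing these fine-grained scales inside a coarse-grained INT8 layout so that integer matrix-multiply kernels stay contiguous.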
1802.02976
A mixed finite element for weakly-symmetric elasticity
We develop a finite element discretization for the weakly symmetric equations of linear elasticity on tetrahedral meshes. The finite element combines, for $r \geq 0$, discontinuous polynomials of order $r$ for the displacement, $H(\mathrm{div})$-conforming polynomials of order $r+1$ for the stress, and $H(\mathrm{curl})$-conforming polynomials of order $r+1$ for the vector representation of the multiplier. We prove that this triplet is stable and has optimal approximation properties. The lowest order case can be combined with inexact quadrature to eliminate the stress and multiplier variables, leaving a compact cell-centered finite volume scheme for the displacement.
labels: cs.CE; __index_level_0__: 89,870

2407.19040
A Fault Prognostic System for the Turbine Guide Bearings of a Hydropower Plant Using Long-Short Term Memory (LSTM)
Hydroelectricity, being a renewable source of energy, helps fulfill global electricity demand, and Hydropower Plants (HPPs) have therefore always been in the limelight of research. Fast-paced technological advancement is enabling the development of state-of-the-art power generation machines. This has not only improved turbine efficiency but has also increased the complexity of these systems. Consequently, efficient Operation & Maintenance (O&M) of such intricate power generation systems has become a more challenging task, and there has been a shift from conventional reactive approaches to more intelligent predictive approaches in maintaining HPPs. This research therefore targets the development of an artificially intelligent fault prognostics system for the turbine bearings of an HPP. The proposed method utilizes the Long Short-Term Memory (LSTM) algorithm in developing the model. Initially, the model is trained and tested with bearing vibration data from a test rig. Subsequently, it is further trained and tested with realistic bearing vibration data obtained from an HPP operating in Pakistan via the Supervisory Control and Data Acquisition (SCADA) system. The model demonstrates highly effective predictions of bearing vibration values, achieving a remarkably low RMSE.
labels: cs.AI, cs.SY; __index_level_0__: 476,619

2201.01703
Probing TryOnGAN
TryOnGAN is a recent virtual try-on approach, which generates highly realistic images and outperforms most previous approaches. In this article, we reproduce the TryOnGAN implementation and probe it from diverse angles: the impact of transfer learning, variants of conditioning image generation with poses, and properties of latent space interpolation. Some of these facets have never been explored in the literature before. We find that transfer helps training initially but its gains are lost as models train longer, and that pose conditioning via concatenation performs better. The latent space self-disentangles the pose and style features and enables style transfer across poses. Our code and models are available in open source.
labels: cs.LG, cs.CV; __index_level_0__: 274,327

2501.17116
Optimizing Large Language Model Training Using FP4 Quantization
The growing computational demands of training large language models (LLMs) necessitate more efficient methods. Quantized training presents a promising solution by enabling low-bit arithmetic operations to reduce these costs. While FP8 precision has demonstrated feasibility, leveraging FP4 remains a challenge due to significant quantization errors and limited representational capacity. This work introduces the first FP4 training framework for LLMs, addressing these challenges with two key innovations: a differentiable quantization estimator for precise weight updates and an outlier clamping and compensation strategy to prevent activation collapse. To ensure stability, the framework integrates a mixed-precision training scheme and vector-wise quantization. Experimental results demonstrate that our FP4 framework achieves accuracy comparable to BF16 and FP8, with minimal degradation, scaling effectively to 13B-parameter LLMs trained on up to 100B tokens. With the emergence of next-generation hardware supporting FP4, our framework sets a foundation for efficient ultra-low precision training.
labels: cs.LG, cs.CL; __index_level_0__: 528,215

1403.6106
Fragmentation transition in a coevolving network with link-state dynamics
We study a network model that couples the dynamics of link states with the evolution of the network topology. The state of each link, either A or B, is updated according to the majority rule or zero-temperature Glauber dynamics, in which links adopt the state of the majority of their neighboring links in the network. Additionally, a link that is in a local minority is rewired to a randomly chosen node. While large systems evolving under the majority rule alone always fall into disordered topological traps composed by frustrated links, any amount of rewiring is able to drive the network to complete order, by relinking frustrated links and so releasing the system from traps. However, depending on the relative rate of the majority rule and the rewiring processes, the system evolves towards different ordered absorbing configurations: either a one-component network with all links in the same state or a network fragmented in two components with opposite states. For low rewiring rates and finite size networks there is a domain of bistability between fragmented and non-fragmented final states. Finite size scaling indicates that fragmentation is the only possible scenario for large systems and any nonzero rate of rewiring.
labels: cs.SI; __index_level_0__: 31,793

2204.02587
Learning to Anticipate Future with Dynamic Context Removal
Anticipating future events is an essential capability for intelligent systems and embodied AI. However, compared to traditional recognition tasks, the uncertainty of the future and the reasoning ability required make the anticipation task very challenging and far from solved. In this field, previous methods usually focus on model architecture design, while little attention has been paid to training an anticipation model with a proper learning policy. To this end, we propose a novel training scheme called Dynamic Context Removal (DCR), which dynamically schedules the visibility of the observed future during the learning procedure. It follows a human-like curriculum learning process, i.e., gradually removing the event context to increase the anticipation difficulty until the final anticipation target is satisfied. Our learning scheme is plug-and-play and easy to integrate with any reasoning model, including transformers and LSTMs, with advantages in both effectiveness and efficiency. In extensive experiments, the proposed method achieves state-of-the-art results on four widely used benchmarks. Our code and models are publicly released at https://github.com/AllenXuuu/DCR.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
290,004
2205.03019
A Fingerprint Detection Method by Fingerprint Ridge Orientation Check
Fingerprints are popular among biometric-based systems due to ease of acquisition, uniqueness and availability. Nowadays they are used in smartphone security, digital payment and digital lockers. Fingerprint recognition technology has been studied for a long time, and its recognition rate has recently risen to a high level. In particular, with the introduction of Deep Neural Network technologies, recognition rates that could not be reached before have been achieved. In this paper, we propose a fingerprint detection algorithm used in a fingerprint recognition system.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
295,147
2003.06530
Ethics in the digital era
Ethics is an ancient matter for humankind; from the origin of civilizations, ethics has been related to the most relevant human concerns and has shaped cultures. Ethics was initially tied to religion, politics and philosophy, and was then fragmented into specific communities of practice. The ongoing digital revolution enabled by Artificial Intelligence and data is bringing wicked ethical problems in the social application of these technologies. However, a broader perspective is also necessary. We now face global and highly dynamic challenges that affect groups and individuals, especially those that are most vulnerable. Individual-oriented ethics is no longer sufficient; the new ethics has to consider the several scales at which the current complex society is organized and the interconnections between different systems. Ethics should also respond to the systemic changes in behavior produced by external factors and threats. Furthermore, AI and digital technologies are global and make us more connected and smart, but also more homogeneous, predictable and ultimately controllable. Ethics must take a stand to preserve and keep promoting individuals' rights and uniqueness as well as cultural heterogeneity. Digital technologies have to be the foundation for new models of society and help ensure ethical individual and collective values. For these reasons, science has to be at the core of the new ethics, as it helps us understand the complex world. Finally, AI has advanced through the ambition to humanize matter, so we should expect ethics to respond to the future status of machines and their interactions with humans.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
168,144
2407.10424
CodeV: Empowering LLMs for Verilog Generation through Multi-Level Summarization
The increasing complexity and high costs associated with modern processor design have led to a surge in demand for processor design automation. Instruction-tuned large language models (LLMs) have demonstrated remarkable performance in automatically generating code for general-purpose programming languages like Python. However, these methods fail on hardware description languages (HDLs) like Verilog due to the scarcity of high-quality instruction tuning data, as even advanced LLMs like GPT-3.5 exhibit limited performance on Verilog generation. To address this issue, we observe that (1) Verilog code collected from the real world has higher quality than that generated by LLMs. (2) LLMs like GPT-3.5 excel in summarizing Verilog code rather than generating it. Based on these observations, this paper introduces CodeV, a series of open-source instruction-tuned Verilog generation LLMs. Instead of generating descriptions first and then getting the corresponding code from advanced LLMs, we prompt the LLM with Verilog code and let the LLM generate the corresponding natural language description by multi-level summarization. Experimental results show that CodeV relatively surpasses the previous open-source SOTA by 14.4% (BetterV in VerilogEval) and 11.3% (RTLCoder in RTLLM) respectively, and also relatively outperforms the previous commercial SOTA GPT-4 by 22.1% in VerilogEval.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
472,982
2105.11581
Resource Allocation for Massive MIMO HetNets with Quantize-Forward Relaying
We investigate how massive MIMO impacts the uplink transmission design in a heterogeneous network (HetNet) where multiple users communicate with a macro-cell base station (MCBS) with the help of a small-cell BS (SCBS) with zero-forcing (ZF) detection at each BS. We first analyze the quantize-forward (QF) relaying scheme with joint decoding (JD) at the MCBS. To maximize the rate region, we optimize the quantization of all user data streams at the SCBS by developing a novel water-filling algorithm that is based on the Descartes' rule of signs. Our result shows that as a user link to the SCBS becomes stronger than that to the MCBS, the SCBS deploys finer quantization to that user data stream. We further propose a new simplified scheme through Wyner-Ziv (WZ) binning and time-division (TD) transmission at the SCBS, which allows not only sequential but also separate decoding of each user message at the MCBS. For this new QF-WZTD scheme, the optimal quantization parameters are identical to that of the QF-JD scheme while the phase durations are conveniently optimized as functions of the quantization parameters. Despite its simplicity, the QF-WZTD scheme achieves the same rate performance of the QF-JD scheme, making it an attractive option for future HetNets.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
236,742
2202.02643
The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training
Random pruning is arguably the most naive way to attain sparsity in neural networks, but has been deemed uncompetitive by either post-training pruning or sparse training. In this paper, we focus on sparse training and highlight a perhaps counter-intuitive finding, that random pruning at initialization can be quite powerful for the sparse training of modern neural networks. Without any delicate pruning criteria or carefully pursued sparsity structures, we empirically demonstrate that sparsely training a randomly pruned network from scratch can match the performance of its dense equivalent. There are two key factors that contribute to this revival: (i) the network sizes matter: as the original dense networks grow wider and deeper, the performance of training a randomly pruned sparse network will quickly grow to match that of its dense equivalent, even at high sparsity ratios; (ii) appropriate layer-wise sparsity ratios can be pre-chosen for sparse training, which proves to be another important performance booster. Simple as it looks, a randomly pruned subnetwork of Wide ResNet-50 can be sparsely trained to outperform a dense Wide ResNet-50 on ImageNet. We also observe that such randomly pruned networks outperform dense counterparts in other favorable aspects, such as out-of-distribution detection, uncertainty estimation, and adversarial robustness. Overall, our results strongly suggest there is larger-than-expected room for sparse training at scale, and the benefits of sparsity might be more universal beyond carefully designed pruning. Our source code can be found at https://github.com/VITA-Group/Random_Pruning.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
278,887
2003.09451
Learning reduced systems via deep neural networks with memory
We present a general numerical approach for constructing governing equations for unknown dynamical systems when only data on a subset of the state variables are available. The unknown equations for these observed variables are thus a reduced system of the complete set of state variables. Reduced systems possess memory integrals, based on the well-known Mori-Zwanzig (MZ) formalism. Our numerical strategy to recover the reduced system starts by formulating a discrete approximation of the memory integral in the MZ formulation. The resulting unknown approximate MZ equations are finite dimensional, in the sense that a finite number of past history data are involved. We then present a deep neural network structure that directly incorporates the history terms to produce memory in the network. The approach is suitable for any practical system with finite memory length. We then use a set of numerical examples to demonstrate the effectiveness of our method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
169,051
2405.01149
Optimizing Satellite Network Infrastructure: A Joint Approach to Gateway Placement and Routing
Satellite constellation systems are becoming more attractive to provide communication services worldwide, especially in areas without network connectivity. While optimizing satellite gateway placement is crucial for operators to minimize deployment and operating costs, reducing the number of gateways may require more inter-satellite link hops to reach the ground network, thereby increasing latency. Therefore, it is of significant importance to develop a framework that optimizes gateway placement, dynamic routing, and flow management in inter-satellite links to enhance network performance. To this end, we model an optimization problem as a mixed-integer problem with a cost function combining the number of gateways, flow allocation, and traffic latency, allowing satellite operators to set priorities based on their policies. Our simulation results indicate that the proposed approach effectively reduces the number of active gateways by selecting their most appropriate locations while balancing the trade-off between the number of gateways and traffic latency. Furthermore, we demonstrate the impact of different weights in the cost function on performance through comparative analysis.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
451,248
1806.04010
Fully automated primary particle size analysis of agglomerates on transmission electron microscopy images via artificial neural networks
There is a high demand for fully automated methods for the analysis of primary particle size distributions of agglomerates on transmission electron microscopy images. Therefore, a novel method, based on the utilization of artificial neural networks, was proposed, implemented and validated. The training of the artificial neural networks requires large quantities (up to several hundreds of thousands) of transmission electron microscopy images of agglomerates consisting of primary particles with known sizes. Since the manual evaluation of such large amounts of transmission electron microscopy images is not feasible, a synthesis of lifelike transmission electron microscopy images as training data was implemented. The proposed method can compete with state-of-the-art automated imaging particle size methods like the Hough transformation, ultimate erosion and watershed transformation and is in some cases even able to outperform these methods. It is however still outperformed by the manual analysis.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
100,141
2410.02042
EAB-FL: Exacerbating Algorithmic Bias through Model Poisoning Attacks in Federated Learning
Federated Learning (FL) is a technique that allows multiple parties to train a shared model collaboratively without disclosing their private data. It has become increasingly popular due to its distinct privacy advantages. However, FL models can suffer from biases against certain demographic groups (e.g., racial and gender groups) due to the heterogeneity of data and party selection. Researchers have proposed various strategies for characterizing the group fairness of FL algorithms to address this issue. However, the effectiveness of these strategies in the face of deliberate adversarial attacks has not been fully explored. Although existing studies have revealed various threats (e.g., model poisoning attacks) against FL systems caused by malicious participants, their primary aim is to decrease model accuracy, while the potential of leveraging poisonous model updates to exacerbate model unfairness remains unexplored. In this paper, we propose a new type of model poisoning attack, EAB-FL, with a focus on exacerbating group unfairness while maintaining a good level of model utility. Extensive experiments on three datasets demonstrate the effectiveness and efficiency of our attack, even with state-of-the-art fairness optimization algorithms and secure aggregation rules employed.
false
false
false
false
true
false
true
false
false
false
false
false
true
false
false
false
false
false
494,070
2405.02503
Axiomatic Causal Interventions for Reverse Engineering Relevance Computation in Neural Retrieval Models
Neural models have demonstrated remarkable performance across diverse ranking tasks. However, the processes and internal mechanisms along which they determine relevance are still largely unknown. Existing approaches for analyzing neural ranker behavior with respect to IR properties rely either on assessing overall model behavior or employing probing methods that may offer an incomplete understanding of causal mechanisms. To provide a more granular understanding of internal model decision-making processes, we propose the use of causal interventions to reverse engineer neural rankers, and demonstrate how mechanistic interpretability methods can be used to isolate components satisfying term-frequency axioms within a ranking model. We identify a group of attention heads that detect duplicate tokens in earlier layers of the model, then communicate with downstream heads to compute overall document relevance. More generally, we propose that this style of mechanistic analysis opens up avenues for reverse engineering the processes neural retrieval models use to compute relevance. This work aims to initiate granular interpretability efforts that will not only benefit retrieval model development and training, but ultimately ensure safer deployment of these models.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
451,786
2003.08727
Decentralized MCTS via Learned Teammate Models
Decentralized online planning can be an attractive paradigm for cooperative multi-agent systems, due to improved scalability and robustness. A key difficulty of such an approach lies in making accurate predictions about the decisions of other agents. In this paper, we present a trainable online decentralized planning algorithm based on decentralized Monte Carlo Tree Search, combined with models of teammates learned from previous episodic runs. By only allowing one agent to adapt its models at a time, under the assumption of ideal policy approximation, successive iterations of our method are guaranteed to improve joint policies, and eventually lead to convergence to a Nash equilibrium. We test the efficiency of the algorithm by performing experiments in several scenarios of the spatial task allocation environment introduced in [Claes et al., 2015]. We show that deep learning and convolutional neural networks can be employed to produce accurate policy approximators which exploit the spatial features of the problem, and that the proposed algorithm improves over the baseline planning performance for particularly challenging domain configurations.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
168,822
1907.06884
Deep Reinforcement Learning Based Robot Arm Manipulation with Efficient Training Data through Simulation
Deep reinforcement learning trains neural networks using experiences sampled from the replay buffer, which is commonly updated at each time step. In this paper, we propose a method to update the replay buffer adaptively and selectively to train a robot arm to accomplish a suction task in simulation. The response time of the agent is thoroughly taken into account. The state transitions that remain stuck at the boundary of constraint are not stored. The policy trained with our method works better than the one with the common replay buffer update method. The result is demonstrated both by simulation and by experiment with a real robot arm.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
138,732
2312.01612
xNeuSM: Explainable Neural Subgraph Matching with Graph Learnable Multi-hop Attention Networks
Subgraph matching is a challenging problem with a wide range of applications in database systems, biochemistry, and cognitive science. It involves determining whether a given query graph is present within a larger target graph. Traditional graph-matching algorithms provide precise results but face challenges in large graph instances due to the NP-complete nature of the problem, limiting their practical applicability. In contrast, recent neural network-based approximations offer more scalable solutions, but often lack interpretable node correspondences. To address these limitations, this article presents xNeuSM: Explainable Neural Subgraph Matching, which introduces Graph Learnable Multi-hop Attention Networks (GLeMA) that adaptively learn the parameters governing the attention factor decay for each node across hops rather than relying on fixed hyperparameters. We provide a theoretical analysis establishing error bounds for GLeMA's approximation of multi-hop attention as a function of the number of hops. Additionally, we prove that learning distinct attention decay factors for each node leads to a correct approximation of multi-hop attention. Empirical evaluation on real-world datasets shows that xNeuSM achieves substantial improvements in prediction accuracy of up to 34% compared to approximate baselines and, notably, at least a seven-fold faster query time than exact algorithms. The source code of our implementation is available at https://github.com/martinakaduc/xNeuSM.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
412,515
2304.11123
China and the U.S. produce more impactful AI research when collaborating together
Artificial Intelligence (AI) has become a disruptive technology, promising to grant a significant economic and strategic advantage to nations that harness its power. China, with its recent push towards AI adoption, is challenging the U.S.'s position as the global leader in this field. Given AI's massive potential, as well as the fierce geopolitical tensions between China and the U.S., several recent policies have been put in place to discourage AI scientists from migrating to, or collaborating with, the other nation. Nevertheless, the extent of talent migration and cross-border collaboration are not fully understood. Here, we analyze a dataset of over 350,000 AI scientists and 5,000,000 AI papers. We find that since 2000, China and the U.S. have led the field in terms of impact, novelty, productivity, and workforce. Most AI scientists who move to China come from the U.S., and most who move to the U.S. come from China, highlighting a notable bidirectional talent migration. Moreover, the vast majority of those moving in either direction have Asian ancestry. Upon moving, those scientists continue to collaborate frequently with those in the origin country. Although the number of collaborations between the two countries has increased since the dawn of the millennium, such collaborations continue to be relatively rare. A matching experiment reveals that the two countries have always been more impactful when collaborating than when each works without the other. These findings suggest that instead of suppressing cross-border migration and collaboration between the two nations, the science could benefit from promoting such activities.
false
false
false
true
true
false
false
false
false
false
false
false
false
true
false
false
false
false
359,693
2304.14660
Segment Anything Model for Medical Images?
The Segment Anything Model (SAM) is the first foundation model for general image segmentation. It has achieved impressive results on various natural image segmentation tasks. However, medical image segmentation (MIS) is more challenging because of the complex modalities, fine anatomical structures, uncertain and complex object boundaries, and wide-range object scales. To fully validate SAM's performance on medical data, we collected and sorted 53 open-source datasets and built a large medical segmentation dataset with 18 modalities, 84 objects, 125 object-modality paired targets, 1050K 2D images, and 6033K masks. We comprehensively analyzed different models and strategies on the so-called COSMOS 1050K dataset. Our findings mainly include the following: 1) SAM showed remarkable performance in some specific objects but was unstable, imperfect, or even totally failed in other situations. 2) SAM with the large ViT-H showed better overall performance than that with the small ViT-B. 3) SAM performed better with manual hints, especially box, than the Everything mode. 4) SAM could help human annotation with high labeling quality and less time. 5) SAM was sensitive to the randomness in the center point and tight box prompts, and may suffer from a serious performance drop. 6) SAM performed better than interactive methods with one or a few points, but will be outpaced as the number of points increases. 7) SAM's performance correlated to different factors, including boundary complexity, intensity differences, etc. 8) Finetuning the SAM on specific medical tasks could improve its average DICE performance by 4.39% and 6.68% for ViT-B and ViT-H, respectively. We hope that this comprehensive report can help researchers explore the potential of SAM applications in MIS, and guide how to appropriately use and develop SAM.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
361,054
2412.15726
Fine-tuning Whisper on Low-Resource Languages for Real-World Applications
This paper presents a new approach to fine-tuning OpenAI's Whisper model for low-resource languages by introducing a novel data generation method that converts sentence-level data into a long-form corpus, using Swiss German as a case study. Non-sentence-level data, which could improve the performance of long-form audio, is difficult to obtain and often restricted by copyright laws. Our method bridges this gap by transforming more accessible sentence-level data into a format that preserves the model's ability to handle long-form audio and perform segmentation without requiring non-sentence-level data. Our data generation process improves performance in several real-world applications and leads to the development of a new state-of-the-art speech-to-text (STT) model for Swiss German. We compare our model with a non-fine-tuned Whisper and our previous state-of-the-art Swiss German STT models, where our new model achieves higher BLEU scores. Our results also indicate that the proposed method is adaptable to other low-resource languages, supported by written guidance and code that allows the creation of fine-tuned Whisper models, which keep segmentation capabilities and allow the transcription of longer audio files using only sentence-level data with high quality.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
519,249
1508.01718
Study of Phonemes Confusions in Hierarchical Automatic Phoneme Recognition System
In this paper, we analyze the impact of confusions on the robustness of a phoneme recognition system. The confusions are detected at the pronunciation level and in the confusion matrices of the phoneme recognizer. The confusions show that some similarities between phonemes at the pronunciation level significantly affect the recognition rates. This paper proposes to understand those confusions in order to improve the performance of the phoneme recognition system by isolating the problematic phonemes. Confusion analysis leads to building a new hierarchical recognizer using a new phoneme distribution and the information from the confusion matrices. This new hierarchical phoneme recognition system shows significant improvements in the recognition rates on the TIMIT database.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
45,812
2207.14801
Recognition of Handwritten Chinese Text by Segmentation: A Segment-annotation-free Approach
Online and offline handwritten Chinese text recognition (HCTR) has been studied for decades. Early methods adopted oversegmentation-based strategies but suffered from low speed, insufficient accuracy, and high cost of character segmentation annotations. Recently, segmentation-free methods based on connectionist temporal classification (CTC) and attention mechanisms have dominated the field of HCTR. However, people actually read text character by character, especially for ideograms such as Chinese. This raises the question: are segmentation-free strategies really the best solution to HCTR? To explore this issue, we propose a new segmentation-based method for recognizing handwritten Chinese text that is implemented using a simple yet efficient fully convolutional network. A novel weakly supervised learning method is proposed to enable the network to be trained using only transcript annotations; thus, the expensive character segmentation annotations required by previous segmentation-based methods can be avoided. Owing to the lack of context modeling in fully convolutional networks, we propose a contextual regularization method to integrate contextual information into the network during the training stage, which can further improve the recognition performance. Extensive experiments conducted on four widely used benchmarks, namely CASIA-HWDB, CASIA-OLHWDB, ICDAR2013, and SCUT-HCCDoc, show that our method significantly surpasses existing methods on both online and offline HCTR, and exhibits a considerably higher inference speed than CTC/attention-based approaches.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
310,710
2501.03654
Data Augmentation for Deep Learning Regression Tasks by Machine Learning Models
Deep learning (DL) models have gained prominence in domains such as computer vision and natural language processing but remain underutilized for regression tasks involving tabular data. In these cases, traditional machine learning (ML) models often outperform DL models. In this study, we propose and evaluate various data augmentation (DA) techniques to improve the performance of DL models for tabular data regression tasks. We compare the performance gain of Neural Networks by different DA strategies ranging from a naive method of duplicating existing observations and adding noise to a more sophisticated DA strategy that preserves the underlying statistical relationship in the data. Our analysis demonstrates that the advanced DA method significantly improves DL model performance across multiple datasets and regression tasks, resulting in an average performance increase of over 10% compared to baseline models without augmentation. The efficacy of these DA strategies was rigorously validated across 30 distinct datasets, with multiple iterations and evaluations using three different automated deep learning (AutoDL) frameworks: AutoKeras, H2O, and AutoGluon. This study demonstrates that by leveraging advanced DA techniques, DL models can realize their full potential in regression tasks, thereby contributing to broader adoption and enhanced performance in practical applications.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
522,952
1811.00241
On the End-to-End Solution to Mandarin-English Code-switching Speech Recognition
Code-switching (CS) refers to a linguistic phenomenon where a speaker uses different languages in an utterance or between alternating utterances. In this work, we study end-to-end (E2E) approaches to the Mandarin-English code-switching speech recognition (CSSR) task. We first examine the effectiveness of using data augmentation and byte-pair encoding (BPE) subword units. More importantly, we propose a multitask learning recipe, where a language identification task is explicitly learned in addition to the E2E speech recognition task. Furthermore, we introduce an efficient word vocabulary expansion method for language modeling to alleviate data sparsity issues under the code-switching scenario. Experimental results on the SEAME data, a Mandarin-English CS corpus, demonstrate the effectiveness of the proposed methods.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
112,050
2006.00719
ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning
We introduce ADAHESSIAN, a second order stochastic optimization algorithm which dynamically incorporates the curvature of the loss function via ADAptive estimates of the HESSIAN. Second order algorithms are among the most powerful optimization algorithms with superior convergence properties as compared to first order methods such as SGD and Adam. The main disadvantage of traditional second order methods is their heavier per iteration computation and poor accuracy as compared to first order methods. To address these, we incorporate several novel approaches in ADAHESSIAN, including: (i) a fast Hutchinson based method to approximate the curvature matrix with low computational overhead; (ii) a root-mean-square exponential moving average to smooth out variations of the Hessian diagonal across different iterations; and (iii) a block diagonal averaging to reduce the variance of Hessian diagonal elements. We show that ADAHESSIAN achieves new state-of-the-art results by a large margin as compared to other adaptive optimization methods, including variants of Adam. In particular, we perform extensive tests on CV, NLP, and recommendation system tasks and find that ADAHESSIAN: (i) achieves 1.80%/1.45% higher accuracy on ResNets20/32 on Cifar10, and 5.55% higher accuracy on ImageNet as compared to Adam; (ii) outperforms AdamW for transformers by 0.13/0.33 BLEU score on IWSLT14/WMT14 and 2.7/1.0 PPL on PTB/Wikitext-103; (iii) outperforms AdamW for SqueezeBert by 0.41 points on GLUE; and (iv) achieves 0.032% better score than Adagrad for DLRM on the Criteo Ad Kaggle dataset. Importantly, we show that the cost per iteration of ADAHESSIAN is comparable to first order methods, and that it exhibits robustness towards its hyperparameters.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
179,555
2004.03266
Self-Adjusting Evolutionary Algorithms for Multimodal Optimization
Recent theoretical research has shown that self-adjusting and self-adaptive mechanisms can provably outperform static settings in evolutionary algorithms for binary search spaces. However, the vast majority of these studies focuses on unimodal functions which do not require the algorithm to flip several bits simultaneously to make progress. In fact, existing self-adjusting algorithms are not designed to detect local optima and do not have any obvious benefit to cross large Hamming gaps. We suggest a mechanism called stagnation detection that can be added as a module to existing evolutionary algorithms (both with and without prior self-adjusting algorithms). Added to a simple (1+1) EA, we prove an expected runtime on the well-known Jump benchmark that corresponds to an asymptotically optimal parameter setting and outperforms other mechanisms for multimodal optimization like heavy-tailed mutation. We also investigate the module in the context of a self-adjusting (1+$\lambda$) EA and show that it combines the previous benefits of this algorithm on unimodal problems with more efficient multimodal optimization. To explore the limitations of the approach, we additionally present an example where both self-adjusting mechanisms, including stagnation detection, do not help to find a beneficial setting of the mutation rate. Finally, we investigate our module for stagnation detection experimentally.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
171,499
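The stagnation-detection module described above can be sketched for a simple (1+1) EA. The code below is a hedged toy illustration on OneMax, not the paper's implementation: the threshold formula and constants are simplified, but the mechanism follows the idea in the abstract (raise the mutation strength once success at the current strength has become unlikely; reset after any improvement).

```python
import math
import random

def stagnation_detection_ea(n=20, max_evals=50000, seed=1):
    """(1+1) EA with a stagnation-detection module (illustrative sketch).

    Standard bit mutation flips each bit with rate s/n. A counter tracks
    evaluations since the last improvement; once it exceeds a threshold
    under which success at strength s would have been likely, s is raised,
    letting the algorithm cross larger Hamming gaps at local optima.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fit = sum(x)  # OneMax fitness: number of one-bits
    s, counter = 1, 0
    for _ in range(max_evals):
        if fit == n:
            break
        y = [b ^ (rng.random() < s / n) for b in x]
        counter += 1
        f = sum(y)
        if f > fit:
            x, fit = y, f
            s, counter = 1, 0  # improvement: reset strength and counter
        elif counter > 2 * (n / s) ** s * math.log(n):
            s = min(s + 1, n)  # stagnation detected: raise mutation strength
            counter = 0
    return fit
```

On a unimodal function like OneMax the module stays mostly dormant; its benefit, per the abstract, appears on multimodal benchmarks such as Jump, where only a larger mutation strength can leave the local optimum.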
2004.11440
Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs
As the use of machine learning (ML) models in product development and data-driven decision-making processes became pervasive in many domains, people's focus on building a well-performing model has increasingly shifted to understanding how their model works. While scholarly interest in model interpretability has grown rapidly in research communities like HCI, ML, and beyond, little is known about how practitioners perceive and aim to provide interpretability in the context of their existing workflows. This lack of understanding of interpretability as practiced may prevent interpretability research from addressing important needs, or lead to unrealistic solutions. To bridge this gap, we conducted 22 semi-structured interviews with industry practitioners to understand how they conceive of and design for interpretability while they plan, build, and use their models. Based on a qualitative analysis of our results, we differentiate interpretability roles, processes, goals and strategies as they exist within organizations making heavy use of ML models. The characterization of interpretability work that emerges from our analysis suggests that model interpretability frequently involves cooperation and mental model comparison between people in different roles, often aimed at building trust not only between people and models but also between people within the organization. We present implications for design that discuss gaps between the interpretability challenges that practitioners face in their practice and approaches proposed in the literature, highlighting possible research directions that can better address real-world needs.
true
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
173,905
2411.05885
Alternative Learning Paradigms for Image Quality Transfer
Image Quality Transfer (IQT) aims to enhance the contrast and resolution of low-quality medical images, e.g. obtained from low-power devices, with rich information learned from higher quality images. In contrast to existing IQT methods which adopt supervised learning frameworks, in this work, we propose two novel formulations of the IQT problem. The first approach uses an unsupervised learning framework, whereas the second is a combination of both supervised and unsupervised learning. The unsupervised learning approach considers a sparse representation (SRep) and dictionary learning model, which we call IQT-SRep, whereas the combination of supervised and unsupervised learning approach is based on deep dictionary learning (DDL), which we call IQT-DDL. The IQT-SRep approach trains two dictionaries using a SRep model using pairs of low- and high-quality volumes. Subsequently, the SRep of a low-quality block, in terms of the low-quality dictionary, can be directly used to recover the corresponding high-quality block using the high-quality dictionary. On the other hand, the IQT-DDL approach explicitly learns a high-resolution dictionary to upscale the input volume, while the entire network, including high dictionary generator, is simultaneously optimised to take full advantage of deep learning methods. The two models are evaluated using a low-field magnetic resonance imaging (MRI) application aiming to recover high-quality images akin to those obtained from high-field scanners. Experiments comparing the proposed approaches against state-of-the-art supervised deep learning IQT method (IQT-DL) identify that the two novel formulations of the IQT problem can avoid bias associated with supervised methods when tested using out-of-distribution data that differs from the distribution of the data the model was trained on. This highlights the potential benefit of these novel paradigms for IQT.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
506,873
2404.18470
ECC Analyzer: Extract Trading Signal from Earnings Conference Calls using Large Language Model for Stock Performance Prediction
In the realm of financial analytics, leveraging unstructured data, such as earnings conference calls (ECCs), to forecast stock volatility is a critical challenge that has attracted both academics and investors. While previous studies have used multimodal deep learning-based models to obtain a general view of ECCs for volatility prediction, they often fail to capture detailed, complex information. Our research introduces a novel framework: \textbf{ECC Analyzer}, which utilizes large language models (LLMs) to extract richer, more predictive content from ECCs to aid the model's prediction performance. We use pre-trained large models to extract textual and audio features from ECCs and implement a hierarchical information extraction strategy to extract more fine-grained information. This strategy first extracts paragraph-level general information by summarizing the text and then extracts fine-grained focus sentences using Retrieval-Augmented Generation (RAG). These features are then fused through multimodal feature fusion to perform volatility prediction. Experimental results demonstrate that our model outperforms traditional analytical benchmarks, confirming the effectiveness of advanced LLM techniques in financial analysis.
false
true
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
450,282
2402.03072
Learning to Abstract Visuomotor Mappings using Meta-Reinforcement Learning
We investigated the human capacity to acquire multiple visuomotor mappings for de novo skills. Using a grid navigation paradigm, we tested whether contextual cues implemented as different "grid worlds", allow participants to learn two distinct key-mappings more efficiently. Our results indicate that when contextual information is provided, task performance is significantly better. The same held true for meta-reinforcement learning agents that differed in whether or not they receive contextual information when performing the task. We evaluated their accuracy in predicting human performance in the task and analyzed their internal representations. The results indicate that contextual cues allow the formation of separate representations in space and time when using different visuomotor mappings, whereas the absence of them favors sharing one representation. While both strategies can allow learning of multiple visuomotor mappings, we showed contextual cues provide a computational advantage in terms of how many mappings can be learned.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
426,844
1802.06138
Detecting Social Influence in Event Cascades by Comparing Discriminative Rankers
The global dynamics of event cascades are often governed by the local dynamics of peer influence. However, detecting social influence from observational data is challenging due to confounds like homophily and practical issues like missing data. We propose a simple discriminative method to detect influence from observational data. The core of the approach is to train a ranking algorithm to predict the source of the next event in a cascade, and compare its out-of-sample accuracy against a competitive baseline which lacks access to features corresponding to social influence. We analyze synthetically generated data to show that this method correctly identifies influence in the presence of confounds, and is robust to both missing data and misspecification --- unlike well-known alternatives. We apply the method to two real-world datasets: (1) the co-sponsorship of legislation in the U.S. House of Representatives on a social network of shared campaign donors; (2) rumors about the Higgs boson discovery on a follower network of $10^5$ Twitter accounts. Our model identifies the role of social influence in these scenarios and uses it to make more accurate predictions about the future trajectory of cascades.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
90,598
2411.01244
Precoded faster-than-Nyquist signaling using optimal power allocation for OTFS
A precoded orthogonal time frequency space (OTFS) modulation scheme relying on faster-than-Nyquist (FTN) transmission over doubly selective fading channels is proposed, which enhances the spectral efficiency and improves the Doppler resilience. We derive the input-output relationship of the FTN signaling in the delay-Doppler domain. Eigenvalue decomposition (EVD) is used for eliminating both the effects of inter-symbol interference and correlated additive noise encountered in the delay-Doppler domain to enable efficient symbol-by-symbol demodulation. Furthermore, the power allocation coefficients of individual frames are optimized for maximizing the mutual information under the constraint of the derived total transmit power. Our performance results demonstrate that the proposed FTN-based OTFS scheme can enhance the information rate while achieving a comparable BER performance to that of its conventional Nyquist-based OTFS counterpart that employs the same root-raised-cosine shaping filter.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
504,980
2407.09888
FarFetched: Entity-centric Reasoning and Claim Validation for the Greek Language based on Textually Represented Environments
Our collective attention span is shortened by the flood of online information. With \textit{FarFetched}, we address the need for automated claim validation based on the aggregated evidence derived from multiple online news sources. We introduce an entity-centric reasoning framework in which latent connections between events, actions, or statements are revealed via entity mentions and represented in a graph database. Using entity linking and semantic similarity, we offer a way for collecting and combining information from diverse sources in order to generate evidence relevant to the user's claim. Then, we leverage textual entailment recognition to quantitatively determine whether this assertion is credible, based on the created evidence. Our approach tries to fill the gap in automated claim validation for less-resourced languages and is showcased on the Greek language, complemented by the training of relevant semantic textual similarity (STS) and natural language inference (NLI) models that are evaluated on translated versions of common benchmarks.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
472,759
2010.04545
Study on Leveraging Wind Farm Reactive Power Potential for Uncertain Power System Reactive Power Optimization
This paper suggests leveraging reactive power potential (RPP) embedded in wind farms to improve power system operational safety and optimality. First, three typical RPP provision approaches are analyzed and a two-stage robust linear optimization based RPP evaluation method is proposed. This approach yields an RPP range that ensures the security of wind farm operations under any realization of uncertainty regarding the wind farm. Simplified DistFlow equations are employed here for a compromise between computational accuracy and cost. Next, an uncertain RPP-involved reactive power optimization problem is introduced, through which system operators ensure system-wide security and optimality regarding the base case and against any possible deviation caused by uncertain lumped loads and renewable generation. Steady-state models of automatic generation control and local voltage control are also captured in this uncertain reactive power optimization, which is then transformed through Soyster's method into a deterministic optimization problem that is readily solvable. Case studies have conceptually validated that even with notable uncertainty, wind farms are still a competent reactive power resource providing considerable RPP. Also, simulation confirms positive and notable improvement of leveraging wind-farm RPP on system-wide operational security and optimality, especially for power systems with high wind penetration.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
199,782
2201.08538
Computation of Regions of Attraction for Hybrid Limit Cycles Using Reachability: An Application to Walking Robots
Contact-rich robotic systems, such as legged robots and manipulators, are often represented as hybrid systems. However, the stability analysis and region-of-attraction computation for these systems are often challenging because of the discontinuous state changes upon contact (also referred to as state resets). In this work, we cast the computation of region-of-attraction as a Hamilton-Jacobi (HJ) reachability problem. This enables us to leverage HJ reachability tools that are compatible with general nonlinear system dynamics, and can formally deal with state and input constraints as well as bounded disturbances. Our main contribution is the generalization of the HJ reachability framework to account for the discontinuous state changes originating from state resets, which has remained a challenge until now. We apply our approach for computing regions of attraction for several underactuated walking robots and demonstrate that the proposed approach can (a) recover a bigger region-of-attraction than state-of-the-art approaches, (b) handle state resets, nonlinear dynamics, external disturbances, and input constraints, and (c) also provide a stabilizing controller for the system that can leverage the state resets for enhancing system stability.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
276,372
1911.00154
Construction of Constant Dimension Codes from Several Parallel Lifted MRD Codes
In this paper, we generalize the method of using two parallel versions of the lifted MRD code from the existing work [1]. Delsarte's theorem on the rank distribution of MRD codes is a key ingredient for counting codewords in our construction. We give a new generalized construction achieving the following bound: if $n \ge k \ge d$, then $A_q(n+k,k,d) \ge q^{n(k-\frac{d}{2}+1)}+\sum_{r=\frac{d}{2}}^{k-\frac{d}{2}} A_r(Q_q(n,k,\frac{d}{2}))$. On this basis, we also give a construction of constant-dimension subspace codes from several parallel versions of lifted MRD codes. This construction yields a new lower bound for $A_q((s+1)k+n,d,k)$.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
151,739
2207.03341
Softmax-free Linear Transformers
Vision transformers (ViTs) have pushed the state-of-the-art for visual perception tasks. The self-attention mechanism underpinning the strength of ViTs has a quadratic complexity in both computation and memory usage. This motivates the development of approximating the self-attention at linear complexity. However, an in-depth analysis in this work reveals that existing methods are either theoretically flawed or empirically ineffective for visual recognition. We identify that their limitations are rooted in the inheritance of softmax-based self-attention during approximations, that is, normalizing the scaled dot-product between token feature vectors using the softmax function; preserving the softmax operation challenges any subsequent linearization effort. Building on this insight, a family of Softmax-Free Transformers (SOFT) is proposed. Specifically, a Gaussian kernel function is adopted to replace the dot-product similarity, enabling a full self-attention matrix to be approximated under low-rank matrix decomposition. For computational robustness, we estimate the Moore-Penrose inverse using an iterative Newton-Raphson method in the forward process only, while calculating its theoretical gradients only once in the backward process. To further expand applicability (e.g., dense prediction tasks), an efficient symmetric normalization technique is introduced. Extensive experiments on ImageNet, COCO, and ADE20K show that our SOFT significantly improves the computational efficiency of existing ViT variants. With linear complexity, much longer token sequences are permitted by SOFT, resulting in a superior trade-off between accuracy and complexity. Code and models are available at https://github.com/fudan-zvg/SOFT.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
306,807
2205.03471
Dynamically writing coupled memories using a reinforcement learning agent, meeting physical bounds
Traditional memory writing operations proceed one bit at a time, where e.g. an individual magnetic domain is force-flipped by a localized external field. One way to increase material storage capacity would be to write several bits at a time in the bulk of the material. However, the manipulation of bits is commonly done through quasi-static operations. While simple to model, this method is known to reduce memory capacity. In this paper, we demonstrate how a reinforcement learning agent can exploit the dynamical response of a simple multi-bit mechanical system to restore its memory to full capacity. To do so, we introduce a model framework consisting of a chain of bi-stable springs, which is manipulated on one end by the external action of the agent. We show that the agent manages to learn how to reach all available states for three springs, even though some states are not reachable through adiabatic manipulation, and that both the training speed and convergence within physical parameter space are improved using transfer learning techniques. Interestingly, the agent also points to an optimal design of the system in terms of writing time. In fact, it appears to learn how to take advantage of the underlying physics: the control time exhibits a non-monotonic dependence on the internal dissipation, reaching a minimum at a cross-over shown to verify a mechanically motivated scaling relation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
295,289
2111.12928
Facial Depth and Normal Estimation using Single Dual-Pixel Camera
Many mobile manufacturers recently have adopted Dual-Pixel (DP) sensors in their flagship models for faster auto-focus and aesthetic image captures. Despite their advantages, research on their usage for 3D facial understanding has been limited due to the lack of datasets and algorithmic designs that exploit parallax in DP images. This is because the baseline of sub-aperture images is extremely narrow and parallax exists in the defocus blur region. In this paper, we introduce a DP-oriented Depth/Normal network that reconstructs the 3D facial geometry. For this purpose, we collect a DP facial data with more than 135K images for 101 persons captured with our multi-camera structured light systems. It contains the corresponding ground-truth 3D models including depth map and surface normal in metric scale. Our dataset allows the proposed matching network to be generalized for 3D facial depth/normal estimation. The proposed network consists of two novel modules: Adaptive Sampling Module and Adaptive Normal Module, which are specialized in handling the defocus blur in DP images. Finally, the proposed method achieves state-of-the-art performances over recent DP-based depth/normal estimation methods. We also demonstrate the applicability of the estimated depth/normal to face spoofing and relighting.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
268,129
2010.10338
Edge Bias in Federated Learning and its Solution by Buffered Knowledge Distillation
Federated learning (FL), which utilizes communication between the server (core) and local devices (edges) to indirectly learn from more data, is an emerging field in deep learning research. Recently, Knowledge Distillation-based FL methods with notable performance and high applicability have been suggested. In this paper, we choose a knowledge distillation-based FL method as our baseline and tackle a challenging problem that ensues from using these methods. In particular, we focus on the problem incurred in the server model that tries to mimic different datasets, each of which is unique to an individual edge device. We dub the problem 'edge bias', which occurs when multiple teacher models trained on different datasets are used individually to distill knowledge. We introduce this nuisance that occurs in certain scenarios of FL, and to alleviate it, we propose a simple yet effective distillation scheme named 'buffered distillation'. In addition, we also experimentally show that this scheme is effective in mitigating the straggler problem caused by delayed edges.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
201,856
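The multi-teacher setting in which the 'edge bias' above arises can be made concrete with a small sketch. The code below is a hypothetical, minimal illustration of the naive averaged soft target that a server model would mimic when several edge teachers distill knowledge jointly; it is not the paper's buffered-distillation scheme.

```python
import math

def softened_probs(logits, temperature=2.0):
    """Temperature-softened softmax, as used for distillation targets."""
    m = max(logits)
    exps = [math.exp((v - m) / temperature) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]

def multi_teacher_soft_target(teacher_logits, temperature=2.0):
    """Average the softened predictions of several edge teachers.

    When each teacher was trained on a different edge dataset, this naive
    average is exactly where edge bias can creep in: each teacher is
    authoritative only on its own data distribution, yet all contribute
    equally to the target the server model mimics.
    """
    probs = [softened_probs(l, temperature) for l in teacher_logits]
    k = len(probs)
    return [sum(p[c] for p in probs) / k for c in range(len(probs[0]))]
```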
2303.13826
Hard Sample Matters a Lot in Zero-Shot Quantization
Zero-shot quantization (ZSQ) is promising for compressing and accelerating deep neural networks when the data for training full-precision models are inaccessible. In ZSQ, network quantization is performed using synthetic samples, thus, the performance of quantized models depends heavily on the quality of synthetic samples. Nonetheless, we find that the synthetic samples constructed in existing ZSQ methods can be easily fitted by models. Accordingly, quantized models obtained by these methods suffer from significant performance degradation on hard samples. To address this issue, we propose HArd sample Synthesizing and Training (HAST). Specifically, HAST pays more attention to hard samples when synthesizing samples and makes synthetic samples hard to fit when training quantized models. HAST aligns features extracted by full-precision and quantized models to ensure the similarity between features extracted by these two models. Extensive experiments show that HAST significantly outperforms existing ZSQ methods, achieving performance comparable to models that are quantized with real data.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
353,848
2407.06908
Divine LLaMAs: Bias, Stereotypes, Stigmatization, and Emotion Representation of Religion in Large Language Models
Emotions play important epistemological and cognitive roles in our lives, revealing our values and guiding our actions. Previous work has shown that LLMs display biases in emotion attribution along gender lines. However, unlike gender, which says little about our values, religion, as a socio-cultural system, prescribes a set of beliefs and values for its followers. Religions, therefore, cultivate certain emotions. Moreover, these rules are explicitly laid out and interpreted by religious leaders. Using emotion attribution, we explore how different religions are represented in LLMs. We find that: Major religions in the US and European countries are represented with more nuance, displaying a more shaded model of their beliefs. Eastern religions like Hinduism and Buddhism are strongly stereotyped. Judaism and Islam are stigmatized -- the models' refusal rates skyrocket. We ascribe these to cultural bias in LLMs and the scarcity of NLP literature on religion. In the rare instances where religion is discussed, it is often in the context of toxic language, perpetuating the perception of these religions as inherently toxic. This finding underscores the urgent need to address and rectify these biases. Our research underscores the crucial role emotions play in our lives and how our values influence them.
false
false
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
471,573
1307.3399
Social Networking Site For Self Portfolio
Online social networking is a global phenomenon, and there are millions of sites that help people stay connected with friends and family. This project focuses on creating self-portfolios that keep users engaged with their skills. Users follow other users to interact and communicate with them, and can endorse other users' blogs and videos by clicking the hit button. The functionality of this site is designed to cover both professional and academic life. Each user is given a dashboard for uploading videos and writing blogs.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
25,799
1311.2838
A PAC-Bayesian bound for Lifelong Learning
Transfer learning has received a lot of attention in the machine learning community over the last years, and several effective algorithms have been developed. However, relatively little is known about their theoretical properties, especially in the setting of lifelong learning, where the goal is to transfer information to tasks for which no data have been observed so far. In this work we study lifelong learning from a theoretical perspective. Our main result is a PAC-Bayesian generalization bound that offers a unified view on existing paradigms for transfer learning, such as the transfer of parameters or the transfer of low-dimensional representations. We also use the bound to derive two principled lifelong learning algorithms, and we show that these yield results comparable with existing methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
28,354
2002.12445
Multi-tier Automated Planning for Adaptive Behavior (Extended Version)
A planning domain, as any model, is never complete and inevitably makes assumptions on the environment's dynamic. By allowing the specification of just one domain model, the knowledge engineer is only able to make one set of assumptions, and to specify a single objective-goal. Borrowing from work in Software Engineering, we propose a multi-tier framework for planning that allows the specification of different sets of assumptions, and of different corresponding objectives. The framework aims to support the synthesis of adaptive behavior so as to mitigate the intrinsic risk in any planning modeling task. After defining the multi-tier planning task and its solution concept, we show how to solve problem instances by a succinct compilation to a form of non-deterministic planning. In doing so, our technique justifies the applicability of planning with both fair and unfair actions, and the need for more efforts in developing planning systems supporting dual fairness assumptions.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
166,029
2012.11528
Overcoming Language Priors with Self-supervised Learning for Visual Question Answering
Most Visual Question Answering (VQA) models suffer from the language prior problem, which is caused by inherent data biases. Specifically, VQA models tend to answer questions (e.g., what color is the banana?) based on the high-frequency answers (e.g., yellow) ignoring image contents. Existing approaches tackle this problem by creating delicate models or introducing additional visual annotations to reduce question dependency while strengthening image dependency. However, they are still subject to the language prior problem since the data biases have not even been alleviated. In this paper, we introduce a self-supervised learning framework to solve this problem. Concretely, we first automatically generate labeled data to balance the biased data, and propose a self-supervised auxiliary task to utilize the balanced data to assist the base VQA model in overcoming language priors. Our method can compensate for the data biases by generating balanced data without introducing external annotations. Experimental results show that our method can significantly outperform the state-of-the-art, improving the overall accuracy from 49.50% to 57.59% on the most commonly used benchmark VQA-CP v2. In other words, we can increase the performance of annotation-based methods by 16% without using external annotations.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
212,661
1705.00894
Talking Open Data
Enticing users into exploring Open Data remains an important challenge for the whole Open Data paradigm. Standard stock interfaces often used by Open Data portals are anything but inspiring even for tech-savvy users, let alone those without an articulated interest in data science. To address a broader range of citizens, we designed an open data search interface supporting natural language interactions via popular platforms like Facebook and Skype. Our data-aware chatbot answers search requests and suggests relevant open datasets, bringing fun factor and a potential of viral dissemination into Open Data exploration. The current system prototype is available for Facebook (https://m.me/OpenDataAssistant) and Skype (https://join.skype.com/bot/6db830ca-b365-44c4-9f4d-d423f728e741) users.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
72,768
2106.01226
Semi-Supervised Semantic Segmentation with Cross Pseudo Supervision
In this paper, we study the semi-supervised semantic segmentation problem via exploring both labeled data and extra unlabeled data. We propose a novel consistency regularization approach, called cross pseudo supervision (CPS). Our approach imposes the consistency on two segmentation networks perturbed with different initialization for the same input image. The pseudo one-hot label map, output from one perturbed segmentation network, is used to supervise the other segmentation network with the standard cross-entropy loss, and vice versa. The CPS consistency has two roles: encourage high similarity between the predictions of two perturbed networks for the same input image, and expand training data by using the unlabeled data with pseudo labels. Experiment results show that our approach achieves the state-of-the-art semi-supervised segmentation performance on Cityscapes and PASCAL VOC 2012. Code is available at https://git.io/CPS.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
238,432
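The CPS loss described above is easy to state per pixel. The following is a minimal single-pixel sketch in plain Python, assuming the formulation given in the abstract: each network's argmax pseudo-label supervises the other through standard cross-entropy. In a real implementation the pseudo-label is detached, so gradients flow only into the supervised network.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]

def cps_loss(logits_a, logits_b):
    """Cross pseudo supervision loss for one pixel (illustrative sketch).

    Network B's hard pseudo-label supervises network A via cross-entropy,
    and vice versa; the loss is small when the two differently initialized
    networks agree confidently, large when they disagree.
    """
    pseudo_a = max(range(len(logits_a)), key=lambda c: logits_a[c])
    pseudo_b = max(range(len(logits_b)), key=lambda c: logits_b[c])
    ce_a = -math.log(softmax(logits_a)[pseudo_b])  # B's label supervises A
    ce_b = -math.log(softmax(logits_b)[pseudo_a])  # A's label supervises B
    return ce_a + ce_b
```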
2012.09535
Digital Detox -- Mitigating Digital Overuse in Times of Remote Work and Social Isolation
Remote work arrangements and limited recreational options in times of social isolation increase the risk of digital overuse for individuals. Its consequences can range from impaired mental health to issues of technology addiction. A conformant countermovement has popularised digital detoxing, a practice that endorses to deliberately limit technology use to reduce digital involvement and physiological stress. In times of social isolation, however, digital networking may provide the principle access to social interactions. To provide empirical evidence about the sweet spot between mitigating digital overuse and perceived social connectedness, this paper proposes a mixed-methods design to scrutinise the impact of digital detox measures in a professional context. Possible results will help to better understand how digital overuse may effectively be mitigated by remote workers and what measures organisations can take to create a digital environment that supports employee satisfaction and mental health.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
212,100
2207.01960
A Safe Semi-supervised Graph Convolution Network
In the semi-supervised learning field, Graph Convolution Network (GCN), as a variant model of GNN, has achieved promising results for non-Euclidean data by introducing convolution into GNN. However, GCN and its variant models fail to safely use the information of risky unlabeled data, which will degrade the performance of semi-supervised learning. Therefore, we propose a Safe GCN framework (Safe-GCN) to improve the learning performance. In the Safe-GCN, we design an iterative process to label the unlabeled data. In each iteration, a GCN and its supervised version (S-GCN) are learned to find the unlabeled data with high confidence. The high-confidence unlabeled data and their pseudo labels are then added to the label set. Finally, both the added unlabeled data and the labeled ones are used to train an S-GCN, which achieves safe exploration of the risky unlabeled data and enables safe use of large amounts of unlabeled data. The performance of Safe-GCN is evaluated on three well-known citation network datasets, and the obtained results demonstrate the effectiveness of the proposed framework over several graph-based semi-supervised learning methods.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
306,356
1607.02565
Direct Sparse Odometry
We propose a novel direct sparse visual odometry formulation. It combines a fully direct probabilistic model (minimizing a photometric error) with consistent, joint optimization of all model parameters, including geometry -- represented as inverse depth in a reference frame -- and camera motion. This is achieved in real time by omitting the smoothness prior used in other direct methods and instead sampling pixels evenly throughout the images. Since our method does not depend on keypoint detectors or descriptors, it can naturally sample pixels from across all image regions that have intensity gradient, including edges or smooth intensity variations on mostly white walls. The proposed model integrates a full photometric calibration, accounting for exposure time, lens vignetting, and non-linear response functions. We thoroughly evaluate our method on three different datasets comprising several hours of video. The experiments show that the presented approach significantly outperforms state-of-the-art direct and indirect methods in a variety of real-world settings, both in terms of tracking accuracy and robustness.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
58,375
2105.06754
Learning Group Activities from Skeletons without Individual Action Labels
To understand human behavior we must not just recognize individual actions but model possibly complex group activity and interactions. Hierarchical models obtain the best results in group activity recognition but require fine-grained individual action annotations at the actor level. In this paper we show that using only skeletal data we can train a state-of-the-art end-to-end system using only group activity labels at the sequence level. Our experiments show that models trained without individual action supervision perform poorly. On the other hand we show that pseudo-labels can be computed from any pre-trained feature extractor with comparable final performance. Finally our carefully designed lean pose only architecture shows highly competitive results versus more complex multimodal approaches even in the self-supervised variant.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
235,222
2404.10292
From Data Deluge to Data Curation: A Filtering-WoRA Paradigm for Efficient Text-based Person Search
In text-based person search endeavors, data generation has emerged as a prevailing practice, addressing concerns over privacy preservation and the arduous task of manual annotation. Although the number of synthesized data can be infinite in theory, the scientific conundrum of how much generated data optimally fuels subsequent model training persists. We observe that only a subset of the data in these constructed datasets plays a decisive role. Therefore, we introduce a new Filtering-WoRA paradigm, which contains a filtering algorithm to identify this crucial data subset and a WoRA (Weighted Low-Rank Adaptation) learning strategy for light fine-tuning. The filtering algorithm is based on the cross-modality relevance to remove the many coarse matching synthesis pairs. As the number of data decreases, we do not need to fine-tune the entire model. Therefore, we propose a WoRA learning strategy to efficiently update a minimal portion of model parameters. WoRA streamlines the learning process, enabling heightened efficiency in extracting knowledge from fewer, yet potent, data instances. Extensive experimentation validates the efficacy of pretraining, where our model achieves advanced and efficient retrieval performance on challenging real-world benchmarks. Notably, on the CUHK-PEDES dataset, we have achieved a competitive mAP of 67.02% while reducing model training time by 19.82%.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
447,031
2109.01359
CAM-loss: Towards Learning Spatially Discriminative Feature Representations
The backbone of traditional CNN classifier is generally considered as a feature extractor, followed by a linear layer which performs the classification. We propose a novel loss function, termed as CAM-loss, to constrain the embedded feature maps with the class activation maps (CAMs) which indicate the spatially discriminative regions of an image for particular categories. CAM-loss drives the backbone to express the features of target category and suppress the features of non-target categories or background, so as to obtain more discriminative feature representations. It can be simply applied in any CNN architecture with negligible additional parameters and calculations. Experimental results show that CAM-loss is applicable to a variety of network structures and can be combined with mainstream regularization methods to improve the performance of image classification. The strong generalization ability of CAM-loss is validated in the transfer learning and few-shot learning tasks. Based on CAM-loss, we also propose a novel CAAM-CAM matching knowledge distillation method. This method directly uses the CAM generated by the teacher network to supervise the CAAM generated by the student network, which effectively improves the accuracy and convergence rate of the student network.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
253,414
2407.07171
ItTakesTwo: Leveraging Peer Representations for Semi-supervised LiDAR Semantic Segmentation
The costly and time-consuming annotation process to produce large training sets for modelling semantic LiDAR segmentation methods has motivated the development of semi-supervised learning (SSL) methods. However, such SSL approaches often concentrate on employing consistency learning only for individual LiDAR representations. This narrow focus results in limited perturbations that generally fail to enable effective consistency learning. Additionally, these SSL approaches employ contrastive learning based on the sampling from a limited set of positive and negative embedding samples. This paper introduces a novel semi-supervised LiDAR semantic segmentation framework called ItTakesTwo (IT2). IT2 is designed to ensure consistent predictions from peer LiDAR representations, thereby improving the perturbation effectiveness in consistency learning. Furthermore, our contrastive learning employs informative samples drawn from a distribution of positive and negative embeddings learned from the entire training set. Results on public benchmarks show that our approach achieves remarkable improvements over the previous state-of-the-art (SOTA) methods in the field. The code is available at: https://github.com/yyliu01/IT2.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
471,663
2501.16730
Growing the Efficient Frontier on Panel Trees
We introduce a new class of tree-based models, P-Trees, for analyzing (unbalanced) panels of individual asset returns, generalizing high-dimensional sorting with economic guidance and interpretability. Under the mean-variance efficient framework, P-Trees construct test assets that significantly advance the efficient frontier compared to commonly used test assets, with alphas unexplained by benchmark pricing models. P-Tree tangency portfolios also constitute traded factors, recovering the pricing kernel and outperforming popular observable and latent factor models for investments and cross-sectional pricing. Finally, P-Trees capture the complexity of asset returns with sparsity, achieving out-of-sample Sharpe ratios close to those attained only by over-parameterized large models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
528,085
2310.17923
Multi-fingered Dynamic Grasping for Unknown Objects
Dexterous grasping of unseen objects in dynamic environments is an essential prerequisite for the advanced manipulation of autonomous robots. Prior advances rely on several assumptions that simplify the setup, including environment stationarity, pre-defined objects, and low-dimensional end-effectors. Though easing the problem and enabling progress, it undermined the complexity of the real world. Aiming to relax these assumptions, we present a dynamic grasping framework for unknown objects in this work, which uses a five-fingered hand with visual servo control and can compensate for external disturbances. To establish such a system on real hardware, we leverage the recent advances in real-time dexterous generative grasp synthesis and introduce several techniques to secure the robustness and performance of the overall system. Our experiments on real hardware verify the ability of the proposed system to reliably grasp unknown dynamic objects in two realistic scenarios: objects on a conveyor belt and human-robot handover. Note that there has been no prior work that can achieve dynamic multi-fingered grasping for unknown objects like ours up to the time of writing this paper. We hope our pioneering work in this direction can provide inspiration to the community and pave the way for further algorithmic and engineering advances on this challenging task. A video of the experiments is available at https://youtu.be/b87zGNoKELg.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
403,341
2103.16010
Theory-Guided Machine Learning for Process Simulation of Advanced Composites
Science-based simulation tools such as Finite Element (FE) models are routinely used in scientific and engineering applications. While their success is strongly dependent on our understanding of underlying governing physical laws, they suffer inherent limitations including the trade-off between fidelity/accuracy and speed. The recent rise of Machine Learning (ML) proposes a theory-agnostic paradigm. In complex multi-physics problems, however, creating large enough datasets for successful training of ML models has proven to be challenging. One promising strategy to bridge the divide between these approaches and take advantage of their respective strengths is Theory-Guided Machine Learning (TGML) which aims to integrate physical laws into ML algorithms. In this paper, three case studies on thermal management during processing of advanced composites are presented and studied using FE, ML and TGML. A structured approach to incrementally adding increasingly complex physics to the training of a TGML model is presented. The benefits of TGML over ML models are seen in more accurate predictions, particularly outside the training region, and the ability to train with small datasets. One benefit of TGML over FE is significant speed improvement to potentially develop real-time feedback systems. A recent successful implementation of a TGML model to assess producibility of aerospace composite parts is presented.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
227,418
2208.02817
Occupancy Planes for Single-view RGB-D Human Reconstruction
Single-view RGB-D human reconstruction with implicit functions is often formulated as per-point classification. Specifically, a set of 3D locations within the view-frustum of the camera are first projected independently onto the image and a corresponding feature is subsequently extracted for each 3D location. The feature of each 3D location is then used to classify independently whether the corresponding 3D point is inside or outside the observed object. This procedure leads to sub-optimal results because correlations between predictions for neighboring locations are only taken into account implicitly via the extracted features. For more accurate results we propose the occupancy planes (OPlanes) representation, which makes it possible to formulate single-view RGB-D human reconstruction as occupancy prediction on planes which slice through the camera's view frustum. Such a representation provides more flexibility than voxel grids and makes it possible to better leverage correlations than per-point classification. On the challenging S3D data we observe a simple classifier based on the OPlanes representation to yield compelling results, especially in difficult situations with partial occlusions due to other objects and partial visibility, which haven't been addressed by prior work.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
311,582
2410.21067
CRAT: A Multi-Agent Framework for Causality-Enhanced Reflective and Retrieval-Augmented Translation with Large Language Models
Large language models (LLMs) have shown great promise in machine translation, but they still struggle with contextually dependent terms, such as new or domain-specific words. This leads to inconsistencies and errors that are difficult to address. Existing solutions often depend on manual identification of such terms, which is impractical given the complexity and evolving nature of language. While Retrieval-Augmented Generation (RAG) could provide some assistance, its application to translation is limited by issues such as hallucinations from information overload. In this paper, we propose CRAT, a novel multi-agent translation framework that leverages RAG and causality-enhanced self-reflection to address these challenges. This framework consists of several specialized agents: the Unknown Terms Identification agent detects unknown terms within the context, the Knowledge Graph (KG) Constructor agent extracts relevant internal knowledge about these terms and retrieves bilingual information from external sources, the Causality-enhanced Judge agent validates the accuracy of the information, and the Translator agent incorporates the refined information into the final output. This automated process allows for more precise and consistent handling of key terms during translation. Our results show that CRAT significantly improves translation accuracy, particularly in handling context-sensitive terms and emerging vocabulary.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
503,077
2112.08459
Rethinking Nearest Neighbors for Visual Classification
Neural network classifiers have become the de-facto choice for current "pre-train then fine-tune" paradigms of visual classification. In this paper, we investigate k-Nearest-Neighbor (k-NN) classifiers, a classical model-free learning method from the pre-deep learning era, as an augmentation to modern neural network based approaches. As a lazy learning method, k-NN simply aggregates the distance between the test image and top-k neighbors in a training set. We adopt k-NN with pre-trained visual representations produced by either supervised or self-supervised methods in two steps: (1) Leverage k-NN predicted probabilities as indications for easy vs. hard examples during training. (2) Linearly interpolate the k-NN predicted distribution with that of the augmented classifier. Via extensive experiments on a wide range of classification tasks, our study reveals the generality and flexibility of k-NN integration with additional insights: (1) k-NN achieves competitive results, sometimes even outperforming a standard linear classifier. (2) Incorporating k-NN is especially beneficial for tasks where parametric classifiers perform poorly and / or in low-data regimes. We hope these discoveries will encourage people to rethink the role of pre-deep learning, classical methods in computer vision. Our code is available at: https://github.com/KMnP/nn-revisit.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
271,802
2305.11938
XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented Languages
Data scarcity is a crucial issue for the development of highly multilingual NLP systems. Yet for many under-represented languages (ULs) -- languages for which NLP research is particularly far behind in meeting user needs -- it is feasible to annotate small amounts of data. Motivated by this, we propose XTREME-UP, a benchmark defined by: its focus on the scarce-data scenario rather than zero-shot; its focus on user-centric tasks -- tasks with broad adoption by speakers of high-resource languages; and its focus on under-represented languages where this scarce-data scenario tends to be most realistic. XTREME-UP evaluates the capabilities of language models across 88 under-represented languages over 9 key user-centric technologies including ASR, OCR, MT, and information access tasks that are of general utility. We create new datasets for OCR, autocomplete, semantic parsing, and transliteration, and build on and refine existing datasets for other tasks. XTREME-UP provides methodology for evaluating many modeling scenarios including text-only, multi-modal (vision, audio, and text), supervised parameter tuning, and in-context learning. We evaluate commonly used models on the benchmark. We release all code and scripts to train and evaluate models.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
365,763
1403.4405
Absorbing Set Analysis and Design of LDPC Codes from Transversal Designs over the AWGN Channel
In this paper we construct low-density parity-check (LDPC) codes from transversal designs with low error-floors over the additive white Gaussian noise (AWGN) channel. The constructed codes are based on transversal designs that arise from sets of mutually orthogonal Latin squares (MOLS) with cyclic structure. For lowering the error-floors, our approach is twofold: First, we give an exhaustive classification of so-called absorbing sets that may occur in the factor graphs of the given codes. These purely combinatorial substructures are known to be the main cause of decoding errors in the error-floor region over the AWGN channel by decoding with the standard sum-product algorithm (SPA). Second, based on this classification, we exploit the specific structure of the presented codes to eliminate the most harmful absorbing sets and derive powerful constraints for the proper choice of code parameters in order to obtain codes with an optimized error-floor performance.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
31,647
2402.15925
MultiContrievers: Analysis of Dense Retrieval Representations
Dense retrievers compress source documents into (possibly lossy) vector representations, yet there is little analysis of what information is lost versus preserved, and how it affects downstream tasks. We conduct the first analysis of the information captured by dense retrievers compared to the language models they are based on (e.g., BERT versus Contriever). We use 25 MultiBert checkpoints as randomized initialisations to train MultiContrievers, a set of 25 contriever models. We test whether specific pieces of information -- such as gender and occupation -- can be extracted from contriever vectors of wikipedia-like documents. We measure this extractability via information theoretic probing. We then examine the relationship of extractability to performance and gender bias, as well as the sensitivity of these results to many random initialisations and data shuffles. We find that (1) contriever models have significantly increased extractability, but extractability usually correlates poorly with benchmark performance; (2) gender bias is present, but is not caused by the contriever representations; (3) there is high sensitivity to both random initialisation and to data shuffle, suggesting that future retrieval research should test across a wider spread of both.
false
false
false
false
true
true
false
false
true
false
false
false
false
false
false
false
false
false
432,334
2303.02245
Exploring Self-Supervised Representation Learning For Low-Resource Medical Image Analysis
The success of self-supervised learning (SSL) has mostly been attributed to the availability of unlabeled yet large-scale datasets. However, in a specialized domain such as medical imaging which is a lot different from natural images, the assumption of data availability is unrealistic and impractical, as the data itself is scanty and found in small databases, collected for specific prognosis tasks. To this end, we seek to investigate the applicability of self-supervised learning algorithms on small-scale medical imaging datasets. In particular, we evaluate $4$ state-of-the-art SSL methods on three publicly accessible \emph{small} medical imaging datasets. Our investigation reveals that in-domain low-resource SSL pre-training can yield competitive performance to transfer learning from large-scale datasets (such as ImageNet). Furthermore, we extensively analyse our empirical findings to provide valuable insights that can motivate for further research towards circumventing the need for pre-training on a large image corpus. To the best of our knowledge, this is the first attempt to holistically explore self-supervision on low-resource medical datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
349,270
2104.09295
Fourier and Zak transforms of multiplicative characters
In this paper we derive formulas for the N-point discrete Fourier transform and the R1 x R2 finite Zak transform of multiplicative characters on Z/N, where N is an odd integer, and R1 and R2 are co-prime factors of N. In one special case this permits computation of the discrete Fourier transform and the finite Zak transform of the Jacobi symbol, the modified Jacobi sequence, and the Golomb sequence. In other cases, not addressed here, this permits computation of the discrete Fourier transform and the finite Zak transform of certain complex-valued sequences. These results constitute, to our knowledge, the first unified treatment of key Fourier and Zak space properties of multiplicative characters. These results also provide a convenient framework for the design of new character-based sequences.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
231,180
0807.3593
An outer bound for 2-receiver discrete memoryless broadcast channels
An outer bound to the two-receiver discrete memoryless broadcast channel is presented. We compare it to the known outer bounds and show that the outer bound presented is at least as tight as the existing bounds.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
2,106
2308.06398
A Compressive Sensing Based Method for Harmonic State Estimation
Power quality monitoring has become a vital need in modern power systems owing to the need for agile operation and troubleshooting schemes. On the other hand, the nature of load in modern power systems is changing in many ways. Digital loads, which mostly rely on power electronic equipment, may distort the quality of power flowing through the network. Moreover, one of the most critical objectives of smart grids is to improve the quality of services delivered to customers, alongside security, reliability and efficiency. To this end, a novel method based on compressive sensing is proposed in this paper to detect the source and the magnitude of the harmonics. The method takes advantage of compressive sensing theory in such a way that real-time monitoring of harmonic distortion is obtained with a limited number of measurements. The efficacy of the method is checked by means of various simulations on the IEEE 118 bus test system. The results show the capabilities of the method in both noisy and noise-free conditions.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
385,119
2310.10036
Evading Detection Actively: Toward Anti-Forensics against Forgery Localization
Anti-forensics seeks to eliminate or conceal traces of tampering artifacts. Typically, anti-forensic methods are designed to deceive binary detectors and persuade them to misjudge the authenticity of an image. However, to the best of our knowledge, no attempts have been made to deceive forgery detectors at the pixel level and mis-locate forged regions. Traditional adversarial attack methods cannot be directly used against forgery localization due to the following defects: 1) they tend to just naively induce the target forensic models to flip their pixel-level pristine or forged decisions; 2) their anti-forensics performance tends to be severely degraded when faced with unseen forensic models; 3) they lose validity once the target forensic models are retrained with the anti-forensics images generated by them. To tackle the three defects, we propose SEAR (Self-supErvised Anti-foRensics), a novel self-supervised and adversarial training algorithm that effectively trains deep-learning anti-forensic models against forgery localization. SEAR sets a pretext task to reconstruct perturbation for self-supervised learning. In adversarial training, SEAR employs a forgery localization model as a supervisor to explore tampering features and constructs a deep-learning concealer to erase corresponding traces. We have conducted large-scale experiments across diverse datasets. The experimental results demonstrate that, through the combination of self-supervised learning and adversarial learning, SEAR successfully deceives the state-of-the-art forgery localization methods, and tackles the three defects regarding traditional adversarial attack methods mentioned above.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
400,055
2309.06046
BatMan-CLR: Making Few-shots Meta-Learners Resilient Against Label Noise
The negative impact of label noise is well studied in classical supervised learning yet remains an open research question in meta-learning. Meta-learners aim to adapt to unseen learning tasks by learning a good initial model in meta-training and consecutively fine-tuning it according to new tasks during meta-testing. In this paper, we present the first extensive analysis of the impact of varying levels of label noise on the performance of state-of-the-art meta-learners, specifically gradient-based $N$-way $K$-shot learners. We show that the accuracy of Reptile, iMAML, and foMAML drops by up to 42% on the Omniglot and CifarFS datasets when meta-training is affected by label noise. To strengthen the resilience against label noise, we propose two sampling techniques, namely manifold (Man) and batch manifold (BatMan), which transform the noisy supervised learners into semi-supervised ones to increase the utility of noisy labels. We first construct manifold samples of $N$-way $2$-contrastive-shot tasks through augmentation, learning the embedding via a contrastive loss in meta-training, and then perform classification through zeroing on the embedding in meta-testing. We show that our approach can effectively mitigate the impact of meta-training label noise. Even with 60% wrong labels, BatMan and Man can limit the meta-testing accuracy drop to 2.5, 9.4, and 1.1 percentage points, respectively, with existing meta-learners across the Omniglot, CifarFS, and MiniImagenet datasets.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
true
false
false
391,292
2007.04484
Transparency Tools for Fairness in AI (Luskin)
We propose new tools for policy-makers to use when assessing and correcting fairness and bias in AI algorithms. The three tools are: - A new definition of fairness called "controlled fairness" with respect to choices of protected features and filters. The definition provides a simple test of fairness of an algorithm with respect to a dataset. This notion of fairness is suitable in cases where fairness is prioritized over accuracy, such as in cases where there is no "ground truth" data, only data labeled with past decisions (which may have been biased). - Algorithms for retraining a given classifier to achieve "controlled fairness" with respect to a choice of features and filters. Two algorithms are presented, implemented and tested. These algorithms require training two different models in two stages. We experiment with combinations of various types of models for the first and second stage and report on which combinations perform best in terms of fairness and accuracy. - Algorithms for adjusting model parameters to achieve a notion of fairness called "classification parity". This notion of fairness is suitable in cases where accuracy is prioritized. Two algorithms are presented, one which assumes that protected features are accessible to the model during testing, and one which assumes protected features are not accessible during testing. We evaluate our tools on three different publicly available datasets. We find that the tools are useful for understanding various dimensions of bias, and that in practice the algorithms are effective in starkly reducing a given observed bias when tested on new data.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
186,366
1910.09324
Multi-dimensional Features for Prediction with Tweets
With the rise of opioid abuse in the US, there has been a growth of overlapping hotspots for overdose-related and HIV-related deaths in Springfield, Boston, Fall River, New Bedford, and parts of Cape Cod. With a large part of the population, including rural communities, active on social media, it is crucial that we leverage the predictive power of social media as a preventive measure. We explore the predictive power of the micro-blogging social media website Twitter with respect to HIV new diagnosis rates per county. While trending work in Twitter NLP has focused primarily on text-based features, we show that multi-dimensional feature construction can significantly improve the predictive power of topic features alone with respect to STIs (sexually transmitted infections). By multi-dimensional features, we mean leveraging not only the topical features (text) of a corpus, but also location-based information (counties) about the tweets in feature-construction. We develop novel text-location-based smoothing features to predict new diagnoses of HIV.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
150,153
2112.15250
Benign Overfitting in Adversarially Robust Linear Classification
"Benign overfitting", where classifiers memorize noisy training data yet still achieve a good generalization performance, has drawn great attention in the machine learning community. To explain this surprising phenomenon, a series of works have provided theoretical justification in over-parameterized linear regression, classification, and kernel methods. However, it is not clear if benign overfitting still occurs in the presence of adversarial examples, i.e., examples with tiny and intentional perturbations to fool the classifiers. In this paper, we show that benign overfitting indeed occurs in adversarial training, a principled approach to defend against adversarial examples. In detail, we prove the risk bounds of the adversarially trained linear classifier on the mixture of sub-Gaussian data under $\ell_p$ adversarial perturbations. Our result suggests that under moderate perturbations, adversarially trained linear classifiers can achieve the near-optimal standard and adversarial risks, despite overfitting the noisy training data. Numerical experiments validate our theoretical findings.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
273,721
2406.08080
AustroTox: A Dataset for Target-Based Austrian German Offensive Language Detection
Model interpretability in toxicity detection greatly profits from token-level annotations. However, currently such annotations are only available in English. We introduce a dataset annotated for offensive language detection sourced from a news forum, notable for its incorporation of the Austrian German dialect, comprising 4,562 user comments. In addition to binary offensiveness classification, we identify spans within each comment constituting vulgar language or representing targets of offensive statements. We evaluate fine-tuned language models as well as large language models in a zero- and few-shot fashion. The results indicate that while fine-tuned models excel in detecting linguistic peculiarities such as vulgar dialect, large language models demonstrate superior performance in detecting offensiveness in AustroTox. We publish the data and code.
false
false
false
false
true
false
false
false
true
false
false
false
false
true
false
false
false
false
463,342
2502.07855
Vision-Language Models for Edge Networks: A Comprehensive Survey
Vision Large Language Models (VLMs) combine visual understanding with natural language processing, enabling tasks like image captioning, visual question answering, and video analysis. While VLMs show impressive capabilities across domains such as autonomous vehicles, smart surveillance, and healthcare, their deployment on resource-constrained edge devices remains challenging due to processing power, memory, and energy limitations. This survey explores recent advancements in optimizing VLMs for edge environments, focusing on model compression techniques, including pruning, quantization, knowledge distillation, and specialized hardware solutions that enhance efficiency. We provide a detailed discussion of efficient training and fine-tuning methods, edge deployment challenges, and privacy considerations. Additionally, we discuss the diverse applications of lightweight VLMs across healthcare, environmental monitoring, and autonomous systems, illustrating their growing impact. By highlighting key design strategies, current challenges, and offering recommendations for future directions, this survey aims to inspire further research into the practical deployment of VLMs, ultimately making advanced AI accessible in resource-limited settings.
false
false
false
false
true
false
false
false
true
false
false
true
false
false
false
false
false
false
532,804
2109.06932
A Crawler Architecture for Harvesting the Clear, Social, and Dark Web for IoT-Related Cyber-Threat Intelligence
The clear, social, and dark web have lately been identified as rich sources of valuable cyber-security information that, given the appropriate tools and methods, may be identified, crawled and subsequently leveraged to actionable cyber-threat intelligence. In this work, we focus on the information gathering task, and present a novel crawling architecture for transparently harvesting data from security websites in the clear web, security forums in the social web, and hacker forums/marketplaces in the dark web. The proposed architecture adopts a two-phase approach to data harvesting. Initially a machine learning-based crawler is used to direct the harvesting towards websites of interest, while in the second phase state-of-the-art statistical language modelling techniques are used to represent the harvested information in a latent low-dimensional feature space and rank it based on its potential relevance to the task at hand. The proposed architecture is realised using exclusively open-source tools, and a preliminary evaluation with crowdsourced results demonstrates its effectiveness.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
255,319
2006.16981
Learning to Combine Top-Down and Bottom-Up Signals in Recurrent Neural Networks with Attention over Modules
Robust perception relies on both bottom-up and top-down signals. Bottom-up signals consist of what's directly observed through sensation. Top-down signals consist of beliefs and expectations based on past experience and short-term memory, such as how the phrase `peanut butter and~...' will be completed. The optimal combination of bottom-up and top-down information remains an open question, but the manner of combination must be dynamic and both context and task dependent. To effectively utilize the wealth of potential top-down information available, and to prevent the cacophony of intermixed signals in a bidirectional architecture, mechanisms are needed to restrict information flow. We explore deep recurrent neural net architectures in which bottom-up and top-down signals are dynamically combined using attention. Modularity of the architecture further restricts the sharing and communication of information. Together, attention and modularity direct information flow, which leads to reliable performance improvements in perceptual and language tasks, and in particular improves robustness to distractions and noisy data. We demonstrate on a variety of benchmarks in language modeling, sequential image classification, video prediction and reinforcement learning that the \emph{bidirectional} information flow can improve results over strong baselines.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
184,967
1512.07502
Convolutional Architecture Exploration for Action Recognition and Image Classification
Convolutional Architecture for Fast Feature Embedding (CAFFE) [11] is a software package for the training, classifying, and feature extraction of images. The UCF Sports Action dataset is a widely used machine learning dataset that has 200 videos taken in 720x480 resolution of 9 different sporting activities: diving, golf, swinging, kicking, lifting, horseback riding, running, skateboarding, swinging (various gymnastics), and walking. In this report we describe a Caffe feature extraction pipeline for images taken from the videos of the UCF Sports Action dataset. A similar test was performed with OverFeat, and results were inferior to Caffe. This study is intended to explore the architecture and hyperparameters needed for effective static analysis of action in videos and classification over a variety of image datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
50,426
2305.16048
UFO: Unified Fact Obtaining for Commonsense Question Answering
Leveraging external knowledge to enhance the reasoning ability is crucial for commonsense question answering. However, the existing knowledge bases heavily rely on manual annotation which unavoidably causes deficiency in coverage of world-wide commonsense knowledge. Accordingly, the knowledge bases fail to be flexible enough to support the reasoning over diverse questions. Recently, large-scale language models (LLMs) have dramatically improved the intelligence in capturing and leveraging knowledge, which opens up a new way to address the issue of eliciting knowledge from language models. We propose a Unified Facts Obtaining (UFO) approach. UFO turns LLMs into knowledge sources and produces relevant facts (knowledge statements) for the given question. We first develop a unified prompt consisting of demonstrations that cover different aspects of commonsense and different question styles. On this basis, we instruct the LLMs to generate question-related supporting facts for various commonsense questions via prompting. After facts generation, we apply a dense retrieval-based fact selection strategy to choose the best-matched fact. These facts are then fed into the answer inference model along with the question. Notably, due to the design of unified prompts, UFO can support reasoning in various commonsense aspects (including general commonsense, scientific commonsense, and social commonsense). Extensive experiments on CommonsenseQA 2.0, OpenBookQA, QASC, and Social IQA benchmarks show that UFO significantly improves the performance of the inference model and outperforms manually constructed knowledge sources.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
367,897