Columns:
- id: string (lengths 9 to 16)
- title: string (lengths 4 to 278)
- abstract: string (lengths 3 to 4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (0 to 541k)
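The schema above describes a multi-label classification dataset with one boolean column per arXiv category. As a minimal sketch of how such a frame might be queried, assuming pandas; the two rows below are placeholders built from the schema, not actual dataset contents:

```python
import pandas as pd

# Toy frame mimicking the schema (ids/titles illustrative only);
# each cs.* column is a boolean multi-label flag.
df = pd.DataFrame({
    "id": ["2103.16387", "2010.11973"],
    "title": ["Paper A", "Paper B"],
    "abstract": ["...", "..."],
    "cs.CL": [True, True],
    "cs.CY": [True, False],
    "__index_level_0__": [227569, 202497],
})

# Boolean indexing selects rows carrying a given label;
# summing label columns row-wise counts labels per paper.
cl_papers = df[df["cs.CL"]]
label_cols = ["cs.CL", "cs.CY"]
df["n_labels"] = df[label_cols].sum(axis=1)
print(len(cl_papers), df["n_labels"].tolist())
```

Because the labels are independent booleans rather than a single class column, a paper can carry any number of categories at once (including none, which the `Other` flag appears to cover).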
2103.16387
The Unfolding Structure of Arguments in Online Debates: The case of a No-Deal Brexit
In the last decade, political debates have progressively shifted to social media. Rhetorical devices employed by online actors and factions that operate in these debating arenas can be captured and analysed to conduct a statistical reading of societal controversies and their argumentation dynamics. In this paper, we propose a five-step methodology to extract, categorize and explore the latent argumentation structures of online debates. Using Twitter data about a "no-deal" Brexit, we focus on the expected effects in case of materialisation of this event. First, we extract cause-effect claims contained in tweets using RegExes that exploit verbs related to Creation, Destruction and Causation. Second, we categorise extracted "no-deal" effects using a Structural Topic Model estimated on unigrams and bigrams. Third, we select controversial effect topics and explore within-topic argumentation differences between self-declared partisan user factions. We hence type topics using estimated covariate effects on topic propensities; then, using the topics correlation network, we study the topological structure of the debate to identify coherent topical constellations. Finally, we analyse the debate time dynamics and infer lead/follow relations among factions. Results show that the proposed methodology can be employed to perform a statistical rhetorics analysis of debates, and map the architecture of controversies across time. In particular, the "no-deal" Brexit debate is shown to have an assortative argumentation structure heavily characterized by factional constellations of arguments, as well as by polarized narrative frames invoked through verbs related to Creation and Destruction. Our findings highlight the benefits of implementing a systemic approach to the analysis of debates, which allows the unveiling of topical and factional dependencies between arguments employed in online debates.
labels: cs.CL, cs.CY
__index_level_0__: 227,569
2010.11973
Rediscovering the Slavic Continuum in Representations Emerging from Neural Models of Spoken Language Identification
Deep neural networks have been employed for various spoken language recognition tasks, including tasks that are multilingual by definition such as spoken language identification. In this paper, we present a neural model for Slavic language identification in speech signals and analyze its emergent representations to investigate whether they reflect objective measures of language relatedness and/or non-linguists' perception of language similarity. While our analysis shows that the language representation space indeed captures language relatedness to a great extent, we find perceptual confusability between languages in our study to be the best predictor of the language representation similarity.
labels: cs.CL
__index_level_0__: 202,497
1112.6371
Multi-q Analysis of Image Patterns
This paper studies the use of the Tsallis entropy versus the classic Boltzmann-Gibbs-Shannon entropy for classifying image patterns. Given a database of 40 pattern classes, the goal is to determine the class of a given image sample. Our experiments show that the Tsallis entropy encoded in a feature vector for different $q$ indices has a great advantage over the Boltzmann-Gibbs-Shannon entropy for pattern classification, boosting recognition rates by a factor of 3. We discuss the reasons behind this success, shedding light on the usefulness of the Tsallis entropy.
labels: cs.AI, cs.CV
__index_level_0__: 13,619
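The abstract above builds feature vectors from the Tsallis entropy, S_q = (1 - sum_i p_i^q) / (q - 1), evaluated at several q indices, recovering the Boltzmann-Gibbs-Shannon entropy in the limit q -> 1. A minimal sketch, assuming the grey-level histogram as the probability distribution and illustrative q values (the paper's actual q indices and feature design are not given here):

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum(p^q)) / (q - 1); reduces to the
    Boltzmann-Gibbs-Shannon entropy -sum(p log p) as q -> 1."""
    p = p[p > 0]  # drop empty bins: 0 log 0 and 0^q contribute nothing
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))  # BGS limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def multi_q_features(image, qs=(0.5, 1.0, 1.5, 2.0)):
    """Feature vector of Tsallis entropies of the grey-level histogram,
    one entry per q index (these q values are illustrative)."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    return np.array([tsallis_entropy(p, q) for q in qs])

img = np.random.default_rng(0).integers(0, 256, size=(64, 64))
print(multi_q_features(img))
```

Such a vector, computed per image, could then be fed to any standard classifier; the claimed advantage comes from the extra discriminative information contributed by the non-extensive (q != 1) entries.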
1406.0074
Combined Approach for Image Segmentation
Many image segmentation techniques have been developed over the past two decades for segmenting images, which helps with object recognition, occlusion boundary estimation within motion or stereo systems, image compression, and image editing. This paper presents a combined approach for segmenting the image. Histogram equalization is first applied to the input image, giving a contrast-enhanced output image. Median filtering is then applied, which removes noise from the contrast-enhanced image. Finally, the fuzzy c-means clustering algorithm is applied to the denoised image, which gives the segmented output image. In this way, the approach produces a better segmented image with less computation time.
labels: cs.CV
__index_level_0__: 33,519
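The three-stage pipeline described in the abstract above (histogram equalization, then median filtering, then fuzzy c-means clustering) can be sketched in plain numpy; the cluster count c=3, the 3x3 filter window, and the fuzzifier m=2 are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def hist_equalize(img):
    # Histogram equalization: map grey levels through the normalized CDF.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    return (cdf[img] * 255).astype(np.uint8)

def median_filter3(img):
    # 3x3 median filter via a padded shift stack (suppresses impulse noise).
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    stack = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0).astype(np.uint8)

def fuzzy_c_means(x, c=3, m=2.0, iters=50, seed=0):
    # Standard FCM on 1-D intensities: alternate centre and membership
    # updates; return cluster centres and hard (argmax) labels.
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=0)
    return centers, u.argmax(axis=0)

img = np.random.default_rng(1).integers(0, 256, size=(32, 32)).astype(np.uint8)
enhanced = hist_equalize(img)
denoised = median_filter3(enhanced)
seg = fuzzy_c_means(denoised.ravel().astype(float))[1]
print(seg.reshape(img.shape).shape)
```

Each pixel ends up with a hard cluster label; reshaping the label vector back to the image grid yields the segmentation map.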
2209.07420
Scalable Task-Driven Robotic Swarm Control via Collision Avoidance and Learning Mean-Field Control
In recent years, reinforcement learning and its multi-agent analogue have achieved great success in solving various complex control problems. However, multi-agent reinforcement learning remains challenging both in its theoretical analysis and empirical design of algorithms, especially for large swarms of embodied robotic agents where a definitive toolchain remains part of active research. We use emerging state-of-the-art mean-field control techniques in order to convert many-agent swarm control into more classical single-agent control of distributions. This allows profiting from advances in single-agent reinforcement learning at the cost of assuming weak interaction between agents. However, the mean-field model is violated by the nature of real systems with embodied, physically colliding agents. Thus, we combine collision avoidance and learning of mean-field control into a unified framework for tractably designing intelligent robotic swarm behavior. On the theoretical side, we provide novel approximation guarantees for general mean-field control both in continuous spaces and with collision avoidance. On the practical side, we show that our approach outperforms multi-agent reinforcement learning and allows for decentralized open-loop application while avoiding collisions, both in simulation and real UAV swarms. Overall, we propose a framework for the design of swarm behavior that is both mathematically well-founded and practically useful, enabling the solution of otherwise intractable swarm problems.
labels: cs.AI, cs.LG, cs.RO
__index_level_0__: 317,741
2210.09997
Bag All You Need: Learning a Generalizable Bagging Strategy for Heterogeneous Objects
We introduce a practical robotics solution for the task of heterogeneous bagging, requiring the placement of multiple rigid and deformable objects into a deformable bag. This is a difficult task as it features complex interactions between multiple highly deformable objects under limited observability. To tackle these challenges, we propose a robotic system consisting of two learned policies: a rearrangement policy that learns to place multiple rigid objects and fold deformable objects in order to achieve desirable pre-bagging conditions, and a lifting policy to infer suitable grasp points for bi-manual bag lifting. We evaluate these learned policies on a real-world three-arm robot platform that achieves a 70% heterogeneous bagging success rate with novel objects. To facilitate future research and comparison, we also develop a novel heterogeneous bagging simulation benchmark that will be made publicly available.
labels: cs.RO
__index_level_0__: 324,751
cs/0608105
Application Layer Definition and Analyses of Controller Area Network Bus for Wire Harness Assembly Machine
With its features of multi-master bus access, nondestructive contention-based arbitration, and flexible configuration, the Controller Area Network (CAN) bus is applied to the control system of the Wire Harness Assembly Machine (WHAM). To accomplish the desired goal, the specific features of the CAN bus are analyzed by comparison with other field buses, and the functional performance of the CAN bus system of the WHAM is discussed. Then the application-layer planning of the CAN bus for dynamic priority is presented. The critical issue for the use of the CAN bus system in the WHAM is the data transfer rate between different nodes, so a processing-efficiency model is introduced to assist in analyzing the data transfer procedure. Through the model, it is convenient to verify the real-time behavior of the CAN bus system in the WHAM.
labels: cs.RO, Other
__index_level_0__: 539,660
2102.00527
A Runtime-Based Computational Performance Predictor for Deep Neural Network Training
Deep learning researchers and practitioners usually leverage GPUs to help train their deep neural networks (DNNs) faster. However, choosing which GPU to use is challenging both because (i) there are many options, and (ii) users grapple with competing concerns: maximizing compute performance while minimizing costs. In this work, we present a new practical technique to help users make informed and cost-efficient GPU selections: make performance predictions with the help of a GPU that the user already has. Our technique exploits the observation that, because DNN training consists of repetitive compute steps, predicting the execution time of a single iteration is usually enough to characterize the performance of an entire training process. We make predictions by scaling the execution time of each operation in a training iteration from one GPU to another using either (i) wave scaling, a technique based on a GPU's execution model, or (ii) pre-trained multilayer perceptrons. We implement our technique into a Python library called Habitat and find that it makes accurate iteration execution time predictions (with an average error of 11.8%) on ResNet-50, Inception v3, the Transformer, GNMT, and DCGAN across six different GPU architectures. Habitat supports PyTorch, is easy to use, and is open source.
labels: cs.LG, Other
__index_level_0__: 217,817
2011.04566
MPRNet: Multi-Path Residual Network for Lightweight Image Super Resolution
Lightweight super-resolution networks are extremely important for real-world applications. In recent years, several SR deep learning approaches with outstanding achievements have been introduced at the expense of memory and computational cost. To overcome this problem, a novel lightweight super-resolution network is proposed, which improves on the SOTA performance in lightweight SR and performs roughly on par with computationally expensive networks. The Multi-Path Residual Network is designed with a set of Residual Concatenation Blocks stacked with Adaptive Residual Blocks: ($i$) to adaptively extract informative features and learn more expressive spatial context information; ($ii$) to better leverage multi-level representations before the up-sampling stage; and ($iii$) to allow an efficient information and gradient flow within the network. The proposed architecture also contains a new attention mechanism, the Two-Fold Attention Module, to maximize the representation ability of the model. Extensive experiments show the superiority of our model against other SOTA SR approaches.
labels: cs.CV
__index_level_0__: 205,623
2107.02421
An NLG pipeline for a legal expert system: a work in progress
We present the NLG component for L4, a prototype domain-specific language (DSL) for drafting laws and contracts. As a concrete use case, we describe a pipeline for a legal expert system created from L4 code. The NLG component is used in two steps. The first step is to create an interview, whose answers are processed into a query for an automated reasoner. The second step is to render the answers of the reasoner in natural language.
labels: cs.CL
__index_level_0__: 244,812
1307.1275
Constructing Hierarchical Image-tags Bimodal Representations for Word Tags Alternative Choice
This paper describes our solution to the multi-modal learning challenge of ICML. This solution comprises constructing three-level representations in three consecutive stages and choosing correct tag words with a data-specific strategy. Firstly, we use typical methods to obtain level-1 representations. Each image is represented using MPEG-7 and gist descriptors with additional features released by the contest organizers, and the corresponding word tags are represented by a bag-of-words model with a dictionary of 4000 words. Secondly, we learn the level-2 representations using two stacked RBMs for each modality. Thirdly, we propose a bimodal auto-encoder to learn the similarities/dissimilarities between the pairwise image-tags as level-3 representations. Finally, during the test phase, based on one observation of the dataset, we come up with a data-specific strategy to choose the correct tag words, leading to a leap in overall performance. Our final average accuracy on the private test set is 100%, which ranks first place in this challenge.
labels: cs.LG, cs.NE
__index_level_0__: 25,611
2305.07988
Reconstruct Before Summarize: An Efficient Two-Step Framework for Condensing and Summarizing Meeting Transcripts
Meetings typically involve multiple participants and lengthy conversations, resulting in redundant and trivial content. To overcome these challenges, we propose a two-step framework, Reconstruct before Summarize (RbS), for effective and efficient meeting summarization. RbS first leverages a self-supervised paradigm to annotate essential contents by reconstructing the meeting transcripts. Secondly, we propose a relative positional bucketing (RPB) algorithm to equip (conventional) summarization models to generate the summary. Despite the additional reconstruction process, our proposed RPB significantly compresses the input, leading to faster processing and reduced memory consumption compared to traditional summarization methods. We validate the effectiveness and efficiency of our method through extensive evaluations and analysis. On two meeting summarization datasets, AMI and ICSI, our approach outperforms previous state-of-the-art approaches without relying on large-scale pre-training or expert-grade annotating tools.
labels: cs.CL
__index_level_0__: 364,112
2306.00600
Rotating Features for Object Discovery
The binding problem in human cognition, concerning how the brain represents and connects objects within a fixed network of neural connections, remains a subject of intense debate. Most machine learning efforts addressing this issue in an unsupervised setting have focused on slot-based methods, which may be limiting due to their discrete nature and difficulty to express uncertainty. Recently, the Complex AutoEncoder was proposed as an alternative that learns continuous and distributed object-centric representations. However, it is only applicable to simple toy data. In this paper, we present Rotating Features, a generalization of complex-valued features to higher dimensions, and a new evaluation procedure for extracting objects from distributed representations. Additionally, we show the applicability of our approach to pre-trained features. Together, these advancements enable us to scale distributed object-centric representations from simple toy to real-world data. We believe this work advances a new paradigm for addressing the binding problem in machine learning and has the potential to inspire further innovation in the field.
labels: cs.AI, cs.LG, cs.CV
__index_level_0__: 370,082
2101.10141
Performance Evaluation of Convolutional Neural Networks for Gait Recognition
In this paper, a performance evaluation of well-known deep learning models in gait recognition is presented. For this purpose, a transfer learning scheme is applied to pre-trained models in order to fit them to the CASIA-B dataset for solving a gait recognition task. In this context, 18 popular Convolutional Neural Networks (CNNs) were re-trained using Gait Energy Images (GEIs) of CASIA-B, containing almost 14000 images of 124 classes under various conditions, and their performance was studied in terms of accuracy. Moreover, the performance of the studied models is explained by examining the parts of the images that the models consider when providing their decisions. The experimental results are very promising, since almost all the models achieved a high accuracy of over 90%, which is robust to an increasing number of classes. Furthermore, an important outcome of this study is the fact that a recognition problem can be effectively solved by using CNNs pre-trained on different problems, thus eliminating the need for customized model design.
labels: cs.LG, cs.CV
__index_level_0__: 216,830
2010.08518
Adaptive Feature Selection for End-to-End Speech Translation
Information in speech signals is not evenly distributed, making it an additional challenge for end-to-end (E2E) speech translation (ST) to learn to focus on informative features. In this paper, we propose adaptive feature selection (AFS) for encoder-decoder based E2E ST. We first pre-train an ASR encoder and apply AFS to dynamically estimate the importance of each encoded speech feature to ASR. An ST encoder, stacked on top of the ASR encoder, then receives the filtered features from the (frozen) ASR encoder. We take L0DROP (Zhang et al., 2020) as the backbone for AFS, and adapt it to sparsify speech features with respect to both temporal and feature dimensions. Results on LibriSpeech En-Fr and MuST-C benchmarks show that AFS facilitates learning of ST by pruning out ~84% of temporal features, yielding an average translation gain of ~1.3-1.6 BLEU and a decoding speedup of ~1.4x. In particular, AFS reduces the performance gap compared to the cascade baseline, and outperforms it on LibriSpeech En-Fr with a BLEU score of 18.56 (without data augmentation).
labels: cs.SD, cs.LG, cs.CL
__index_level_0__: 201,201
2308.05667
2D3D-MATR: 2D-3D Matching Transformer for Detection-free Registration between Images and Point Clouds
The commonly adopted detect-then-match approach to registration encounters difficulties in cross-modality cases due to incompatible keypoint detection and inconsistent feature description. We propose 2D3D-MATR, a detection-free method for accurate and robust registration between images and point clouds. Our method adopts a coarse-to-fine pipeline in which it first computes coarse correspondences between downsampled patches of the input image and the point cloud, and then extends them to form dense correspondences between pixels and points within the patch region. The coarse-level patch matching is based on a transformer which jointly learns global contextual constraints with self-attention and cross-modality correlations with cross-attention. To resolve the scale ambiguity in patch matching, we construct a multi-scale pyramid for each image patch and learn to find, for each point patch, the best matching image patch at a proper resolution level. Extensive experiments on two public benchmarks demonstrate that 2D3D-MATR outperforms the previous state-of-the-art P2-Net by around $20$ percentage points on inlier ratio and over $10$ points on registration recall. Our code and models are available at https://github.com/minhaolee/2D3DMATR.
labels: cs.CV
__index_level_0__: 384,872
2402.07595
Comparative Analysis of ImageNet Pre-Trained Deep Learning Models and DINOv2 in Medical Imaging Classification
Medical image analysis frequently encounters data scarcity challenges. Transfer learning has been effective in addressing this issue while conserving computational resources. The recent advent of foundational models like DINOv2, which uses the vision transformer architecture, has opened new opportunities in the field and garnered significant interest. However, DINOv2's performance on clinical data still needs to be verified. In this paper, we performed a glioma grading task using three clinical modalities of brain MRI data. We compared the performance of various pre-trained deep learning models, including those based on ImageNet and DINOv2, in a transfer learning context. Our focus was on understanding the impact of the freezing mechanism on performance. We also validated our findings on three other types of public datasets: chest radiography, fundus radiography, and dermoscopy. Our findings indicate that in our clinical dataset, DINOv2's performance was not as strong as that of ImageNet-based pre-trained models, whereas in public datasets, DINOv2 generally outperformed other models, especially when using the freezing mechanism. Similar performance was observed with various sizes of DINOv2 models across different tasks. In summary, DINOv2 is viable for medical image classification tasks, particularly with data resembling natural images. However, its effectiveness may vary with data that significantly differs from natural images, such as MRI. In addition, employing smaller versions of the model can be adequate for medical tasks, offering resource-saving benefits. Our codes are available at https://github.com/GuanghuiFU/medical_DINOv2_eval.
labels: cs.LG
__index_level_0__: 428,775
2206.10459
Device for measuring the plant physiology and electrophysiology
This paper briefly describes the device - the phytosensor - for measuring physiological and electrophysiological parameters of plants. This system is developed as a bio-physiological sensor in precision agriculture, as a tool in plant research and environmental biology, and for plant enthusiasts in smart home or entertainment applications. The phytosensor measures main physiological parameters such as the leaf transpiration rate, sap flow, tissue conductivity and frequency response, and biopotentials (action potentials and variation potentials), and can conduct electrochemical impedance spectroscopy with organic tissues. Soil moisture and temperature, air quality (CO2, NO2, O3 and other sensors on an I2C bus), and general environmental parameters (light, temperature, humidity, air pressure, electromagnetic and magnetic fields) are also recorded in real time. In addition to phytosensing, the device can also perform phytoactuation, i.e. execute electrical or light stimulation of plants, control irrigation and lighting modes, and conduct fully autonomous experiments with complex feedback-based and adaptive scenarios in robotic or biohybrid systems. This article represents the revised and extended version of the original paper and includes some descriptions and images from the FloraRobotica and BioHybrids projects.
labels: cs.RO, Other
__index_level_0__: 303,915
2501.10782
ML-SceGen: A Multi-level Scenario Generation Framework
Current scientific research has witnessed various attempts at applying Large Language Models to scenario generation, but these attempts are inclined toward only comprehensive or dangerous scenarios. In this paper, we seek to build a three-stage framework that not only lets users regain controllability over the generated scenarios but also generates comprehensive scenarios containing danger factors in uncontrolled intersection settings. In the first stage, LLM agents contribute to translating the key components of the description of the expected scenarios into Functional Scenarios. In the second stage, we use the Answer Set Programming (ASP) solver Clingo to help generate comprehensive logical traffic within intersections. In the last stage, we use an LLM to update relevant parameters to increase the critical level of the concrete scenario.
labels: cs.AI
__index_level_0__: 525,657
2205.06499
MARE: Semantic Supply Chain Disruption Management and Resilience Evaluation Framework
Supply Chains (SCs) are subject to disruptive events that potentially hinder operational performance. The Disruption Management Process (DMP) relies on the analysis of integrated heterogeneous data sources, such as production scheduling, order management and logistics, to evaluate the impact of disruptions on the SC. Existing approaches are limited as they address DMP steps and corresponding data sources in a rather isolated manner, which hinders the systematic handling of a disruption originating anywhere in the SC. Thus, we propose MARE, a semantic disruption management and resilience evaluation framework for the integration of data sources included in all DMP steps, i.e. Monitor/Model, Assess, Recover and Evaluate. MARE leverages semantic technologies, i.e. ontologies, knowledge graphs and SPARQL queries, to model and reproduce SC behavior under disruptive scenarios. Also, MARE includes an evaluation framework to examine the restoration performance of an SC applying various recovery strategies. Semantic SC DMP, put forward by MARE, allows stakeholders to potentially identify the measures to enhance SC integration, increase the resilience of supply networks and ultimately facilitate digitalization.
labels: cs.DB
__index_level_0__: 296,261
1605.05139
Two Roads Diverged: A Semantic Network Analysis of Guanxi on Twitter
Guanxi, roughly translated as "social connection", is a term commonly used in the Chinese language. In this research, we employed a linguistic approach to explore popular discourses on guanxi. Although sharing the same Confucian roots, Chinese communities inside and outside Mainland China have undergone different historical trajectories. Hence, we took a comparative approach to examine guanxi in Mainland China and in Taiwan, Hong Kong, and Macau (TW-HK-M). Comparing guanxi discourses in two Chinese societies aims at revealing the divergence of guanxi culture. The data for this research were collected on Twitter over a three-week period by searching tweets containing guanxi written in Simplified Chinese characters and in Traditional Chinese characters. After building, visualising, and conducting community detection on both semantic networks, the two guanxi discourses were then compared in terms of their major concept sub-communities. This research aims at addressing two questions: Has the meaning of guanxi transformed in contemporary Chinese societies? And how do different socio-economic configurations affect the practice of guanxi? Results suggest that guanxi in interpersonal relationships has adapted to a new family structure in both Chinese societies. In addition, the practice of guanxi in business varies in Mainland China and in TW-HK-M. Furthermore, an extended domain was identified where guanxi is used in a macro-level discussion of state relations. Network representations of the guanxi discourses enabled reification of the concept and shed light on the understanding of social connections and social orders in contemporary China.
labels: cs.SI, cs.CY
__index_level_0__: 55,958
2401.04474
Combining Embedding-Based and Semantic-Based Models for Post-hoc Explanations in Recommender Systems
In today's data-rich environment, recommender systems play a crucial role in decision support systems. They provide users with personalized recommendations and explanations for these recommendations. Embedding-based models, despite their widespread use, often suffer from a lack of interpretability, which can undermine trust and user engagement. This paper presents an approach that combines embedding-based and semantic-based models to generate post-hoc explanations in recommender systems, leveraging ontology-based knowledge graphs to improve interpretability and explainability. By organizing data within a structured framework, ontologies enable the modeling of intricate relationships between entities, which is essential for generating explanations. By combining embedding-based and semantic-based models for post-hoc explanations in recommender systems, the framework we defined aims at producing meaningful and easy-to-understand explanations, enhancing user trust and satisfaction, and potentially promoting the adoption of recommender systems across the e-commerce sector.
labels: cs.AI, cs.IR
__index_level_0__: 420,442
2408.01438
AI for All: Identifying AI incidents Related to Diversity and Inclusion
The rapid expansion of Artificial Intelligence (AI) technologies has introduced both significant advancements and challenges, with diversity and inclusion (D&I) emerging as a critical concern. Addressing D&I in AI is essential to reduce biases and discrimination, enhance fairness, and prevent adverse societal impacts. Despite its importance, D&I considerations are often overlooked, resulting in incidents marked by built-in biases and ethical dilemmas. Analyzing AI incidents through a D&I lens is crucial for identifying causes of biases and developing strategies to mitigate them, ensuring fairer and more equitable AI technologies. However, systematic investigations of D&I-related AI incidents are scarce. This study addresses these challenges by identifying and understanding D&I issues within AI systems through a manual analysis of AI incident databases (AIID and AIAAIC). The research develops a decision tree to investigate D&I issues tied to AI incidents and populate a public repository of D&I-related AI incidents. The decision tree was validated through a card sorting exercise and focus group discussions. The research demonstrates that almost half of the analyzed AI incidents are related to D&I, with a notable predominance of racial, gender, and age discrimination. The decision tree and resulting public repository aim to foster further research and responsible AI practices, promoting the development of inclusive and equitable AI systems.
labels: cs.AI, cs.CY
__index_level_0__: 478,235
2010.00829
Continuous close-range 3D object pose estimation
In the context of future manufacturing lines, removing fixtures will be a fundamental step to increase the flexibility of autonomous systems in assembly and logistic operations. Vision-based 3D pose estimation is a necessity to accurately handle objects that might not be placed at fixed positions during the robot task execution. Industrial tasks bring multiple challenges for the robust pose estimation of objects, such as difficult object properties, tight cycle times and constraints on camera views. In particular, when interacting with objects, we have to work with close-range partial views of objects, which pose a new challenge for typical view-based pose estimation methods. In this paper, we present a 3D pose estimation method based on a gradient-ascent particle filter that integrates new observations on-the-fly to improve the pose estimate. Thereby, we can apply this method online during task execution to save valuable cycle time. In contrast to other view-based pose estimation methods, we model potential views in full 6-dimensional space, which allows us to cope with close-range partial object views. We demonstrate the approach on a real assembly task, in which the algorithm usually converges to the correct pose within 10-15 iterations with an average accuracy of less than 8 mm.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
198,412
2207.12027
Non-cascaded Control Barrier Functions for the Safe Control of Quadrotors
Researchers have developed various cascaded controllers and non-cascaded controllers for the navigation and control of quadrotors in recent years. It is vital to ensure the safety of a quadrotor both in normal state and in abnormal state if a controller tends to make the quadrotor unsafe. To this end, this paper proposes a non-cascaded Control Barrier Function (CBF) for a quadrotor controlled by either cascaded controllers or a non-cascaded controller. Incorporated with a Quadratic Programming (QP) formulation, the non-cascaded CBF can simultaneously regulate the magnitude of the total thrust and the torque of the quadrotor determined by a controller, so as to ensure the safety of the quadrotor both in normal state and in abnormal state. The non-cascaded CBF establishes a non-conservative forward invariant safe region, in which the controller of a quadrotor is fully or partially effective in the navigation or the pose control of the quadrotor. The non-cascaded CBF is applied to a quadrotor performing trajectory tracking and a quadrotor performing aggressive roll maneuvers in simulations to evaluate the effectiveness of the non-cascaded CBF.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
309,862
2404.01438
Generation and Detection of Sign Language Deepfakes - A Linguistic and Visual Analysis
This research explores the positive application of deepfake technology for upper body generation, specifically sign language for the Deaf and Hard of Hearing (DHoH) community. Given the complexity of sign language and the scarcity of experts, the generated videos are vetted by a sign language expert for accuracy. We construct a reliable deepfake dataset, evaluating its technical and visual credibility using computer vision and natural language processing models. The dataset, consisting of over 1200 videos featuring both seen and unseen individuals, is also used to detect deepfake videos targeting vulnerable individuals. Expert annotations confirm that the generated videos are comparable to real sign language content. Linguistic analysis, using textual similarity scores and interpreter evaluations, shows that the interpretation of generated videos is at least 90% similar to authentic sign language. Visual analysis demonstrates that convincingly realistic deepfakes can be produced, even for new subjects. Using a pose/style transfer model, we pay close attention to detail, ensuring hand movements are accurate and align with the driving video. We also apply machine learning algorithms to establish a baseline for deepfake detection on this dataset, contributing to the detection of fraudulent sign language videos.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
443,417
2309.00055
Precision Enhancement of Distribution System State Estimation via Tri-Objective Micro Phasor Measurement Unit Deployment
A tri-objective optimal Micro Phasor Measurement Units ({\mu}-PMUs) Placement method is presented, with a focus on minimizing the following three parameters: (i) the total number of {\mu}-PMU channels, (ii) the maximum state estimation uncertainty, and (iii) the sensitivity of state estimation to line parameter tolerances. The suggested formulation takes single-line and {\mu}-PMU failures into consideration while guaranteeing the complete observability of the system in the presence and absence of contingencies. It also takes into account the impact of zero injection nodes and the number of {\mu}-PMU channels installed at every node. The suggested placement problem is addressed using a customized version of the nondominated sorting genetic algorithm II (NSGA-II). According to the results achieved utilizing three test systems of varying sizes, {\mu}-PMU channels beyond predetermined thresholds only result in higher costs and negligible further decreases in state estimation uncertainty and sensitivity to line parameter tolerances. Additionally, between 30 and 40% of buses may be left uninstrumented if {\mu}-PMUs with only two three-phase channels are utilized, with only a modest negative effect on state estimation performance even in the event of contingencies.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
389,181
cs/0302001
Many Hard Examples in Exact Phase Transitions with Application to Generating Hard Satisfiable Instances
This paper first analyzes the resolution complexity of two random CSP models (i.e. Model RB/RD) for which we can establish the existence of phase transitions and identify the threshold points exactly. By encoding CSPs into CNF formulas, it is proved that almost all instances of Model RB/RD have no tree-like resolution proofs of less than exponential size. Thus, we not only introduce new families of CNF formulas hard for resolution, which is a central task of Proof-Complexity theory, but also propose models with both many hard instances and exact phase transitions. Then, the implications of such models are addressed. It is shown both theoretically and experimentally that an application of Model RB/RD might be in the generation of hard satisfiable instances, which is not only of practical importance but also related to some open problems in cryptography such as generating one-way functions. Subsequently, a further theoretical support for the generation method is shown by establishing exponential lower bounds on the complexity of solving random satisfiable and forced satisfiable instances of RB/RD near the threshold. Finally, conclusions are presented, as well as a detailed comparison of Model RB/RD with the Hamiltonian cycle problem and random 3-SAT, which, respectively, exhibit three different kinds of phase transition behavior in NP-complete problems.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
537,787
2002.03266
Weakly-Supervised Multi-Person Action Recognition in 360$^{\circ}$ Videos
The recent development of commodity 360$^{\circ}$ cameras has enabled a single video to capture an entire scene, which offers promising potential in surveillance scenarios. However, research in omnidirectional video analysis has lagged behind the hardware advances. In this work, we address the important problem of action recognition in top-view 360$^{\circ}$ videos. Due to the wide field-of-view, 360$^{\circ}$ videos usually capture multiple people performing actions at the same time. Furthermore, the appearance of people is deformed. The proposed framework first transforms omnidirectional videos into panoramic videos, then it extracts spatial-temporal features using region-based 3D CNNs for action recognition. We propose a weakly-supervised method based on multi-instance multi-label learning, which trains the model to recognize and localize multiple actions in a video using only video-level action labels as supervision. We perform experiments to quantitatively validate the efficacy of the proposed method and qualitatively demonstrate action localization results. To enable research in this direction, we introduce 360Action, the first omnidirectional video dataset for multi-person action recognition.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
163,203
2408.14480
Handling abort commands for household kitchen robots
We propose a solution for handling abort commands given to robots. The solution is exemplified with a running scenario with household kitchen robots. The robot uses planning to find sequences of actions that must be performed in order to gracefully cancel a previously received command. The Planning Domain Definition Language (PDDL) is used to write a domain to model kitchen activities and behaviours, and this domain is enriched with knowledge from online ontologies and knowledge graphs, like DBPedia. We discuss the results obtained in different scenarios.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
483,554
1507.03289
Optimal Multi-Robot Path Planning on Graphs: Structure and Computational Complexity
We study the problem of optimal multi-robot path planning on graphs (MPP) over four distinct minimization objectives: the total arrival time, the makespan (last arrival time), the total distance, and the maximum (single-robot traveled) distance. On the structure side, we show that each pair of these four objectives induces a Pareto front and cannot always be optimized simultaneously. Then, through reductions from 3-SAT, we further establish that computation over each objective is an NP-hard task, providing evidence that solving MPP optimally is generally intractable. Nevertheless, in a related paper, we design complete algorithms and efficient heuristics for optimizing all four objectives, capable of solving MPP optimally or near-optimally for hundreds of robots in challenging setups.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
45,070
2303.13157
Adiabatic replay for continual learning
Conventional replay-based approaches to continual learning (CL) require, for each learning phase with new data, the replay of samples representing all of the previously learned knowledge in order to avoid catastrophic forgetting. Since the amount of learned knowledge grows over time in CL problems, generative replay spends an increasing amount of time just re-learning what is already known. In this proof-of-concept study, we propose a replay-based CL strategy that we term adiabatic replay (AR), which derives its efficiency from the (reasonable) assumption that each new learning phase is adiabatic, i.e., represents only a small addition to existing knowledge. Each new learning phase triggers a sampling process that selectively replays, from the body of existing knowledge, just such samples that are similar to the new data, in contrast to replaying all of it. Complete replay is not required since AR represents the data distribution by GMMs, which are capable of selectively updating their internal representation only where data statistics have changed. As long as additions are adiabatic, the amount of to-be-replayed samples need not depend on the amount of previously acquired knowledge at all. We verify experimentally that AR is superior to state-of-the-art deep generative replay using VAEs.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
353,566
1904.02783
OTFS-NOMA: An Efficient Approach for Exploiting Heterogenous User Mobility Profiles
This paper considers a challenging communication scenario, in which users have heterogenous mobility profiles, e.g., some users are moving at high speeds and some users are static. A new non-orthogonal multiple-access (NOMA) transmission protocol that incorporates orthogonal time frequency space (OTFS) modulation is proposed. Thereby, users with different mobility profiles are grouped together for the implementation of NOMA. The proposed OTFS-NOMA protocol is shown to be applicable to both uplink and downlink transmission, where sophisticated transmit and receive strategies are developed to remove inter-symbol interference and harvest both multi-path and multi-user diversity. Analytical results demonstrate that both the high-mobility and low-mobility users benefit from the application of OTFS-NOMA. In particular, the use of NOMA allows the spreading of the high-mobility users' signals over a large amount of time-frequency resources, which enhances the OTFS resolution and improves the detection reliability. In addition, OTFS-NOMA ensures that low-mobility users have access to bandwidth resources which in conventional OTFS-orthogonal multiple access (OTFS-OMA) would be solely occupied by the high-mobility users. Thus, OTFS-NOMA improves the spectral efficiency and reduces latency.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
126,520
2211.11646
NeRF-RPN: A general framework for object detection in NeRFs
This paper presents the first significant object detection framework, NeRF-RPN, which directly operates on NeRF. Given a pre-trained NeRF model, NeRF-RPN aims to detect all bounding boxes of objects in a scene. By exploiting a novel voxel representation that incorporates multi-scale 3D neural volumetric features, we demonstrate it is possible to regress the 3D bounding boxes of objects in NeRF directly without rendering the NeRF at any viewpoint. NeRF-RPN is a general framework and can be applied to detect objects without class labels. We experimented with NeRF-RPN using various backbone architectures, RPN head designs, and loss functions. All of them can be trained in an end-to-end manner to estimate high quality 3D bounding boxes. To facilitate future research in object detection for NeRF, we built a new benchmark dataset which consists of both synthetic and real-world data with careful labeling and cleanup. Code and dataset are available at https://github.com/lyclyc52/NeRF_RPN.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
331,818
2406.13828
Neuro-symbolic Training for Reasoning over Spatial Language
Spatial reasoning based on natural language expressions is essential for everyday human tasks. This reasoning ability is also crucial for machines to interact with their environment in a human-like manner. However, recent research shows that even state-of-the-art language models struggle with spatial reasoning over text, especially when facing nested spatial expressions. This is attributed to not achieving the right level of abstraction required for generalizability. To alleviate this issue, we propose training language models with neuro-symbolic techniques that exploit the spatial logical rules as constraints, providing additional supervision to improve spatial reasoning and question answering. Training language models to adhere to spatial reasoning rules guides them in making more effective and general abstractions for transferring spatial knowledge to various domains. We evaluate our approach on existing spatial question-answering benchmarks. Our results indicate the effectiveness of our proposed technique in improving language models in complex multi-hop spatial reasoning over text.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
466,010
2208.00782
Tree edit distance for hierarchical data compatible with HMIL paradigm
We define edit distance for hierarchically structured data compatible with the hierarchical multi-instance learning paradigm. An example of such data is a dataset represented in JSON format where inner Array objects are interpreted as unordered bags of elements. We prove that the defined distance has the correct analytical properties.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
310,966
2401.00167
Leveraging Partial Symmetry for Multi-Agent Reinforcement Learning
Incorporating symmetry as an inductive bias into multi-agent reinforcement learning (MARL) has led to improvements in generalization, data efficiency, and physical consistency. While prior research has succeeded in using a perfect symmetry prior, the realm of partial symmetry in the multi-agent domain remains unexplored. To fill in this gap, we introduce the partially symmetric Markov game, a new subclass of the Markov game. We then theoretically show that the performance error introduced by utilizing symmetry in MARL is bounded, implying that the symmetry prior can still be useful in MARL even in partial symmetry situations. Motivated by this insight, we propose the Partial Symmetry Exploitation (PSE) framework that is able to adaptively incorporate the symmetry prior in MARL under different symmetry-breaking conditions. Specifically, by adaptively adjusting the exploitation of symmetry, our framework is able to achieve superior sample efficiency and overall performance of MARL algorithms. Extensive experiments are conducted to demonstrate the superior performance of the proposed framework over baselines. Finally, we implement the proposed framework in a real-world multi-robot testbed to show its superiority.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
418,906
2203.14269
Safe Hierarchical Model Predictive Control and Planning for Autonomous Systems
Planning and control for autonomous vehicles are usually separated hierarchically. However, increasing performance demands and operation in highly dynamic environments require frequent re-evaluation of the planning and tight integration of control and planning to guarantee safety. We propose an integrated hierarchical predictive control and planning approach to tackle this challenge. Planner and controller are based on the repeated solution of moving horizon optimal control problems. The planner can choose different low-layer controller modes for increased flexibility and performance instead of using a single controller with a large safety margin for collision avoidance under uncertainty. Planning is based on simplified system dynamics and safety, yet flexible operation is ensured by constraint tightening based on a mixed-integer linear programming formulation. A cyclic horizon tube-based model predictive controller guarantees constraint satisfaction for different control modes and disturbances. Examples of such modes are a slow-speed movement with high precision and fast-speed movements with large uncertainty bounds. Allowing for different control modes reduces the conservatism, while the hierarchical decomposition of the problem reduces the computational cost and enables real-time implementation. We derive conditions for recursive feasibility to ensure constraint satisfaction and obstacle avoidance to guarantee safety and ensure compatibility between the layers and modes. Simulation results illustrate the efficiency and applicability of the proposed hierarchical strategy.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
287,943
1602.05183
A note and a correction on measuring cognitive distance in multiple dimensions
In a previous article (Rahman, Guns, Rousseau, and Engels, 2015) we described several approaches to determine the cognitive distance between two units. One of these approaches was based on what we called barycenters in N dimensions. The present note corrects this terminology and introduces the more adequate term 'similarity-adapted publication vectors'. Furthermore, we correct an error in normalization and explain the importance of scale invariance in determining cognitive distance. We also consider weighted cosine similarity as an alternative approach to determine cognitive (dis)similarity. Overall, we find that the three approaches (distance between barycenters, distance between similarity-adapted publication vectors, and weighted cosine similarity) yield very similar results.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
52,223
2012.13608
Synergy via Redundancy: Adaptive Replication Strategies and Fundamental Limits
The maximum possible throughput (or the rate of job completion) of a multi-server system is typically the sum of the service rates of individual servers. Recent work shows that launching multiple replicas of a job and canceling them as soon as one copy finishes can boost the throughput, especially when the service time distribution has high variability. This means that redundancy can, in fact, create synergy among servers such that their overall throughput is greater than the sum of individual servers. This work seeks to find the fundamental limit of the throughput boost achieved by job replication and the optimal replication policy to achieve it. While most previous works consider upfront replication policies, we expand the set of possible policies to delayed launch of replicas. The search for the optimal adaptive replication policy can be formulated as a Markov Decision Process, using which we propose two myopic replication policies, MaxRate and AdaRep, to adaptively replicate jobs. In order to quantify the optimality gap of these and other policies, we derive upper bounds on the service capacity, which provide fundamental limits on the throughput of queueing systems with redundancy.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
213,270
2310.03024
AstroCLIP: A Cross-Modal Foundation Model for Galaxies
We present AstroCLIP, a single, versatile model that can embed both galaxy images and spectra into a shared, physically meaningful latent space. These embeddings can then be used - without any model fine-tuning - for a variety of downstream tasks including (1) accurate in-modality and cross-modality semantic similarity search, (2) photometric redshift estimation, (3) galaxy property estimation from both images and spectra, and (4) morphology classification. Our approach to implementing AstroCLIP consists of two parts. First, we embed galaxy images and spectra separately by pretraining separate transformer-based image and spectrum encoders in self-supervised settings. We then align the encoders using a contrastive loss. We apply our method to spectra from the Dark Energy Spectroscopic Instrument and images from its corresponding Legacy Imaging Survey. Overall, we find remarkable performance on all downstream tasks, even relative to supervised baselines. For example, for a task like photometric redshift prediction, we find similar performance to a specifically-trained ResNet18, and for additional tasks like physical property estimation (stellar mass, age, metallicity, and sSFR), we beat this supervised baseline by 19\% in terms of $R^2$. We also compare our results to a state-of-the-art self-supervised single-modal model for galaxy images, and find that our approach outperforms this benchmark by roughly a factor of two on photometric redshift estimation and physical property prediction in terms of $R^2$, while remaining roughly in-line in terms of morphology classification. Ultimately, our approach represents the first cross-modal self-supervised model for galaxies, and the first self-supervised transformer-based architectures for galaxy images and spectra.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
397,101
2406.16905
Optimising Random Forest Machine Learning Algorithms for User VR Experience Prediction Based on Iterative Local Search-Sparrow Search Algorithm
In this paper, an improved method for VR user experience prediction is investigated by introducing a sparrow search algorithm and a random forest algorithm improved by an iterative local search-optimised sparrow search algorithm. The study firstly conducted a statistical analysis of the data, and then trained and tested using the traditional random forest model, the random forest model improved by the sparrow search algorithm, and the random forest algorithm improved based on the iterative local search-sparrow search algorithm, respectively. The results show that the traditional random forest model has a prediction accuracy of 93% on the training set but only 73.3% on the test set, which is poor in generalisation; whereas the model improved by the sparrow search algorithm has a prediction accuracy of 94% on the test set, which is improved compared with the traditional model. What is more noteworthy is that the improved model based on the iterative local search-sparrow search algorithm achieves 100% accuracy on both the training and test sets, which is significantly better than the other two methods. These research results provide new ideas and methods for VR user experience prediction, especially the improved model based on the iterative local search-sparrow search algorithm performs well and is able to more accurately predict and classify the user's VR experience. In the future, the application of this method in other fields can be further explored, and its effectiveness can be verified through real cases to promote the development of AI technology in the field of user experience.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
467,334
1902.08934
Privacy Preserving Location Data Publishing: A Machine Learning Approach
Publishing datasets plays an essential role in open data research and promoting transparency of government agencies. However, such data publication might reveal users' private information. One of the most sensitive sources of data is spatiotemporal trajectory datasets. Unfortunately, merely removing unique identifiers cannot preserve the privacy of users. Adversaries may know parts of the trajectories or be able to link the published dataset to other sources for the purpose of user identification. Therefore, it is crucial to apply privacy preserving techniques before the publication of spatiotemporal trajectory datasets. In this paper, we propose a robust framework for the anonymization of spatiotemporal trajectory datasets termed as machine learning based anonymization (MLA). By introducing a new formulation of the problem, we are able to apply machine learning algorithms for clustering the trajectories and propose to use $k$-means algorithm for this purpose. A variation of $k$-means algorithm is also proposed to preserve the privacy in overly sensitive datasets. Moreover, we improve the alignment process by considering multiple sequence alignment as part of the MLA. The framework and all the proposed algorithms are applied to TDrive and Geolife location datasets. The experimental results indicate a significantly higher utility of datasets by anonymization based on MLA framework.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
122,300
2211.00599
UNFIS: A Novel Neuro-Fuzzy Inference System with Unstructured Fuzzy Rules for Classification
An important constraint of Fuzzy Inference Systems (FIS) is their structured rules, defined based on evaluating all input variables. Indeed, the length of all fuzzy rules and the number of input variables are equal. However, in many decision-making problems evaluating some conditions on a limited set of input variables is sufficient to decide properly (unstructured rules). Therefore, this constraint limits the performance, generalization, and interpretability of the FIS. To address this issue, this paper presents a neuro-fuzzy inference system for classification applications that can select different sets of input variables for constructing each fuzzy rule. To realize this capability, a new fuzzy selector neuron with an adaptive parameter is proposed that can select input variables in the antecedent part of each fuzzy rule. Moreover, in this paper, the consequent part of the Takagi-Sugeno-Kang FIS is also changed properly to consider only the selected set of input variables. To learn the parameters of the proposed architecture, a trust-region-based learning method (General quasi-Levenberg-Marquardt (GqLM)) is proposed to minimize cross-entropy in multiclass problems. The performance of the proposed method is compared with some related previous approaches in some real-world classification problems. Based on these comparisons the proposed method has better or very close performance with a parsimonious structure consisting of unstructured fuzzy rules.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
327,930
1507.05398
Generating Binary Optimal Codes Using Heterogeneous Parallel Computing
Generation of optimal codes is a well known problem in coding theory. Many computational approaches exist in the literature for finding record breaking codes. However, generating codes with long lengths $n$ using serial algorithms is computationally very expensive, for example the worst case time complexity of a Greedy algorithm is $\mathcal{O}(n\; 4^n)$. In order to improve the efficiency of generating codes with long lengths, we propose and investigate some parallel algorithms using General Purpose Graphic Processing Units (GPGPU). This paper considers the implementation of parallel Greedy algorithm using GPGPU-CUDA (Compute Unified Device Architecture) framework and discusses various optimization techniques to accelerate the GPU code. The performance achieved for optimized parallel implementations is more than two to three orders of magnitude faster than that of serial implementation and shows a great potential of GPGPU in the field of coding theory applications.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
45,285
1911.03711
Unsupervised adulterated red-chili pepper content transformation for hyperspectral classification
Preserving red-chili quality is of utmost importance, as the authorities demand quality-control techniques to detect, classify and prevent impurities. For example, salt, wheat flour, wheat bran, and rice bran contamination in ground red chili, which is typically a food product, poses a serious threat to people who are allergic to such items. This work presents the feasibility of utilizing visible and near-infrared (VNIR) hyperspectral imaging (HSI) to detect and classify the aforementioned adulterants in red chili. However, adulterated red chili data annotation is a big challenge for classification because the acquisition of labeled data for real-time supervised learning is expensive in terms of cost and time. Therefore, this study, for the very first time, proposes a novel approach to annotate the red chili samples using a clustering mechanism at the 500~nm wavelength spectral response due to its dark appearance at the specified wavelength. Later the spectral samples are classified into pure or adulterated using a one-class SVM. The classification performance achieves 99% in the case of pure adulterants or red chili, whereas 85% for adulterated samples. We further show that a single classification model is enough to detect any foreign substance in red chili pepper rather than cascading multiple PLS regression models.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
152,730
cs/0612027
Experimental Information and Statistical Modeling of Physical Laws
Statistical modeling of physical laws connects experiments with mathematical descriptions of natural phenomena. The modeling is based on the probability density of measured variables expressed by experimental data via a kernel estimator. As an objective kernel the scattering function determined by calibration of the instrument is introduced. This function provides for a new definition of experimental information and redundancy of experimentation in terms of information entropy. The redundancy increases with the number of experiments, while the experimental information converges to a value that describes the complexity of the data. The difference between the redundancy and the experimental information is proposed as the model cost function. From its minimum, a proper number of data in the model is estimated. As an optimal, nonparametric estimator of the relation between measured variables the conditional average extracted from the kernel estimator is proposed. The modeling is demonstrated on noisy chaotic data.
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
539,941
2102.07617
On the Philosophical, Cognitive and Mathematical Foundations of Symbiotic Autonomous Systems (SAS)
Symbiotic Autonomous Systems (SAS) are advanced intelligent and cognitive systems exhibiting autonomous collective intelligence enabled by coherent symbiosis of human-machine interactions in hybrid societies. Basic research in the emerging field of SAS has triggered advanced general AI technologies functioning without human intervention or hybrid symbiotic systems synergizing humans and intelligent machines into coherent cognitive systems. This work presents a theoretical framework of SAS underpinned by the latest advances in intelligence, cognition, computer, and system sciences. SAS are characterized by the composition of autonomous and symbiotic systems that adopt bio-brain-social-inspired and heterogeneously synergized structures and autonomous behaviors. This paper explores their cognitive and mathematical foundations. The challenge to seamless human-machine interactions in a hybrid environment is addressed. SAS-based collective intelligence is explored in order to augment human capability by autonomous machine intelligence towards the next generation of general AI, autonomous computers, and trustworthy mission-critical intelligent systems. Emerging paradigms and engineering applications of SAS are elaborated via an autonomous knowledge learning system that symbiotically works between humans and cognitive robots.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
220,160
2102.02078
Multi-UAV Mobile Edge Computing and Path Planning Platform based on Reinforcement Learning
Unmanned Aerial Vehicles (UAVs) are widely used as network processors in mobile networks, but more recently, UAVs have been used in Mobile Edge Computing as mobile servers. However, there are significant challenges to using UAVs in complex environments with obstacles and to cooperation between UAVs. We introduce a new multi-UAV Mobile Edge Computing platform, which aims to provide better Quality-of-Service and path planning based on reinforcement learning to address these issues. The contributions of our work include: 1) optimizing the quality of service for mobile edge computing and path planning in the same reinforcement learning framework; 2) using a sigmoid-like function to depict the terminal users' demand to ensure a higher quality of service; 3) jointly considering the terminal users' demand, risk, and geometric distance in the reinforcement learning reward matrix to ensure quality of service, risk avoidance, and cost savings. Simulations have shown the effectiveness and feasibility of our platform, which can help advance related research.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
218,315
1310.0305
Filtering for More Accurate Dense Tissue Segmentation in Digitized Mammograms
Breast tissue segmentation into dense and fat tissue is important for determining the breast density in mammograms. Knowing the breast density is important in both diagnostic and computer-aided detection applications. There are many different ways to express the density of a breast, and a good-quality segmentation should make accurate classification possible no matter which classification rule is being used. Knowing the correct breast density, and tracking changes in it, could hint at a process that has started within a patient. Mammograms generally suffer from overlapping tissue types, which can lead to inaccurate detection of tissue types. Fibroglandular tissue presents rather high attenuation of X-rays and appears brighter in the resulting image, but overlapping fibrous tissue and blood vessels can easily be mistaken for fibroglandular tissue by automatic segmentation algorithms. Small blood vessels and microcalcifications also appear as bright objects with intensities similar to dense tissue, but they have properties which make it possible to suppress them from the final results. In this paper we separate dense and fat tissue by suppressing the scattered structures which do not represent glandular or dense tissue, in order to divide mammograms more accurately into the two major tissue types. To suppress blood vessels and microcalcifications we use Gabor filters of different sizes and orientations, combined with morphological operations on the filtered image with enhanced contrast.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
27,470
1801.10443
Counting Cells in Time-Lapse Microscopy using Deep Neural Networks
An automatic approach to counting any kind of cells could alleviate work of the experts and boost the research in fields such as regenerative medicine. In this paper, a method for microscopy cell counting using multiple frames (hence temporal information) is proposed. Unlike previous approaches where the cell counting is done independently in each frame (static cell counting), in this work the cell counting prediction is done using multiple frames (dynamic cell counting). A spatiotemporal model using ConvNets and long short term memory (LSTM) recurrent neural networks is proposed to overcome temporal variations. The model outperforms static cell counting in a publicly available dataset of stem cells. The advantages, working conditions and limitations of the ConvNet-LSTM method are discussed. Although our method is tested in cell counting, it can be extrapolated to quantify in video (or correlated image series) any kind of objects or volumes.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
89,293
2402.10487
RPMixer: Shaking Up Time Series Forecasting with Random Projections for Large Spatial-Temporal Data
Spatial-temporal forecasting systems play a crucial role in addressing numerous real-world challenges. In this paper, we investigate the potential of addressing spatial-temporal forecasting problems using general time series forecasting models, i.e., models that do not leverage the spatial relationships among the nodes. We propose an all-Multi-Layer Perceptron (all-MLP) time series forecasting architecture called RPMixer. The all-MLP architecture was chosen due to its recent success in time series forecasting benchmarks. Furthermore, our method capitalizes on the ensemble-like behavior of deep neural networks, where each individual block within the network behaves like a base learner in an ensemble model, particularly when identity mapping residual connections are incorporated. By integrating random projection layers into our model, we increase the diversity among the blocks' outputs, thereby improving the overall performance of the network. Extensive experiments conducted on the largest spatial-temporal forecasting benchmark datasets demonstrate that the proposed method outperforms alternative methods, including both spatial-temporal graph models and general forecasting models.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
429,988
1904.12597
Region homogeneity in the Logarithmic Image Processing framework: application to region growing algorithms
In order to create an image segmentation method robust to lighting changes, two novel homogeneity criteria of an image region were studied. Both were defined using the Logarithmic Image Processing (LIP) framework, whose laws model lighting changes. The first criterion estimates the LIP-additive homogeneity and is based on the LIP-additive law. It is theoretically insensitive to lighting changes caused by variations of the camera exposure time or source intensity. The second, the LIP-multiplicative homogeneity criterion, is based on the LIP-multiplicative law and is insensitive to changes due to variations of the object thickness or opacity. Each criterion is then applied in Revol and Jourlin's (1997) region growing method, which is based on the homogeneity of an image region. The region growing method therefore becomes robust to the lighting changes specific to each criterion. Experiments on simulated and on real images presenting lighting variations prove the robustness of the criteria to those variations. Compared to a state-of-the-art method based on the image component-tree, ours is more robust. These results open the way to numerous applications where the lighting is uncontrolled or partially controlled.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
129,152
1312.3304
The Effect of Eavesdropper's Statistics in Experimental Wireless Secret-Key Generation
This paper investigates the role of the eavesdropper's statistics in the implementation of a practical secret-key generation system. We carefully conduct the information-theoretic analysis of a secret-key generation system from wireless channel gains measured with software-defined radios. In particular, we show that it is inaccurate to assume that the eavesdropper gets no information because of decorrelation with distance. We also provide a bound for the achievable secret-key rate in the finite key-length regime that takes into account the presence of correlated eavesdropper's observations. We evaluate this bound with our experimental gain measurements to show that operating with a finite number of samples incurs a loss in secret-key rate on the order of 20%.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
29,031
2502.01220
Language Models Struggle to Achieve a Consistent Temporal Representation of Facts
Language Models (LMs) have shown substantial improvements in handling factual knowledge, yet their capability to consistently represent temporal facts, which are valid only within specific timeframes, remains underexplored. To investigate this, we introduce TimeStress, a novel dataset comprising 521K statements on 2003 of the most popular temporal facts in Wikidata. Each statement contextualizes a fact with correct and incorrect dates across three precisions (Day, Month, Year). This setup allows us to evaluate LMs' ability to discern between correct and incorrect temporal statements based on their probability of being generated. We assess 18 LMs across various architectures using two metrics: the win rate, indicating how often correct dates outperform incorrect ones, and robustness, reflecting consistent performance across all dates. Our findings reveal that while some LMs achieve a win rate exceeding 80\%, robustness remains low, with the best model achieving only 6\%. Furthermore, robust knowledge at one date precision does not reliably transfer to others, highlighting a significant generalization gap. These results underscore the struggle of LMs to maintain a consistent temporal representation, supporting their limitations as reliable sources of temporal knowledge. We provide all data and code for further research.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
529,757
2009.08438
Competitiveness of MAP-Elites against Proximal Policy Optimization on locomotion tasks in deterministic simulations
The increasing importance of robots and automation creates a demand for learnable controllers which can be obtained through various approaches such as Evolutionary Algorithms (EAs) or Reinforcement Learning (RL). Unfortunately, these two families of algorithms have mainly developed independently and there are only a few works comparing modern EAs with deep RL algorithms. We show that Multidimensional Archive of Phenotypic Elites (MAP-Elites), which is a modern EA, can deliver better-performing solutions than one of the state-of-the-art RL methods, Proximal Policy Optimization (PPO) in the generation of locomotion controllers for a simulated hexapod robot. Additionally, extensive hyper-parameter tuning shows that MAP-Elites displays greater robustness across seeds and hyper-parameter sets. Generally, this paper demonstrates that EAs combined with modern computational resources display promising characteristics and have the potential to contribute to the state-of-the-art in controller learning.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
196,239
2302.00162
Continual Segment: Towards a Single, Unified and Accessible Continual Segmentation Model of 143 Whole-body Organs in CT Scans
Deep learning empowers the mainstream medical image segmentation methods. Nevertheless, current deep segmentation approaches cannot efficiently and effectively adapt and update trained models when new incremental segmentation classes (with or without new training datasets) are required to be added. In real clinical environments, it is preferable that segmentation models can be dynamically extended to segment new organs/tumors without (re-)accessing previous training datasets, due to obstacles of patient privacy and data storage. This process can be viewed as a continual semantic segmentation (CSS) problem, which is understudied for multi-organ segmentation. In this work, we propose a new architectural CSS learning framework to learn a single deep segmentation model for segmenting a total of 143 whole-body organs. Using the encoder/decoder network structure, we demonstrate that a continually-trained then frozen encoder coupled with incrementally-added decoders can extract and preserve sufficiently representative image features for new classes to be subsequently and validly segmented. To maintain a single network model complexity, we trim each decoder progressively using neural architecture search and teacher-student based knowledge distillation. To accommodate both healthy and pathological organs appearing in different datasets, a novel anomaly-aware and confidence learning module is proposed to merge the overlapping organ predictions originating from different decoders. Trained and validated on 3D CT scans of 2500+ patients from four datasets, our single network can segment a total of 143 whole-body organs with very high accuracy, closely reaching the upper-bound performance level obtained by training four separate segmentation models (i.e., one model per dataset/task).
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
343,120
1812.02903
Applied Federated Learning: Improving Google Keyboard Query Suggestions
Federated learning is a distributed form of machine learning where both the training data and model training are decentralized. In this paper, we use federated learning in a commercial, global-scale setting to train, evaluate and deploy a model to improve virtual keyboard search suggestion quality without direct access to the underlying user data. We describe our observations in federated training, compare metrics to live deployments, and present resulting quality increases. In whole, we demonstrate how federated learning can be applied end-to-end to both improve user experiences and enhance user privacy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
115,888
2305.09746
A Range-Null Space Decomposition Approach for Fast and Flexible Spectral Compressive Imaging
We present RND-SCI, a novel framework for compressive hyperspectral image (HSI) reconstruction. Our framework decomposes the reconstructed object into range-space and null-space components, where the range-space part ensures the solution conforms to the compression process, and the null-space term introduces a deep HSI prior to constraining the output to have satisfactory properties. RND-SCI is not only simple in design with strong interpretability but also can be easily adapted to various HSI reconstruction networks, improving the quality of HSIs with minimal computational overhead. RND-SCI significantly boosts the performance of HSI reconstruction networks in retraining, fine-tuning or plugging into a pre-trained off-the-shelf model. Based on the framework and SAUNet, we design an extremely fast HSI reconstruction network, RND-SAUNet, which achieves an astounding 91 frames per second while maintaining superior reconstruction accuracy compared to other less time-consuming methods. Code and models are available at https://github.com/hustvl/RND-SCI.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
364,753
2205.02841
Understanding Transfer Learning for Chest Radiograph Clinical Report Generation with Modified Transformer Architectures
The image captioning task is increasingly prevalent in artificial intelligence applications for medicine. One important application is clinical report generation from chest radiographs. The clinical writing of unstructured reports is time consuming and error-prone. An automated system would improve standardization, error reduction, time consumption, and medical accessibility. In this paper we demonstrate the importance of domain-specific pre-training and propose a modified transformer architecture for the medical image captioning task. To accomplish this, we train a series of modified transformers to generate clinical reports from chest radiograph image input. These modified transformers include: a meshed-memory augmented transformer architecture with visual extractor using ImageNet pre-trained weights, a meshed-memory augmented transformer architecture with visual extractor using CheXpert pre-trained weights, and a meshed-memory augmented transformer whose encoder is passed the concatenated embeddings using both ImageNet pre-trained weights and CheXpert pre-trained weights. We use BLEU(1-4), ROUGE-L, CIDEr, and the clinical CheXbert F1 scores to validate our models and demonstrate competitive scores with state-of-the-art models. We provide evidence that ImageNet pre-training is ill-suited for the medical image captioning task, especially for less frequent conditions (e.g., enlarged cardiomediastinum, lung lesion, pneumothorax). Furthermore, we demonstrate that the double feature model improves performance for specific medical conditions (edema, consolidation, pneumothorax, support devices) and overall CheXbert F1 score, and should be further developed in future work. Such a double feature model, including both ImageNet pre-training as well as domain-specific pre-training, could be used in a wide range of image captioning models in medicine.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
295,076
2105.07264
Neural Trees for Learning on Graphs
Graph Neural Networks (GNNs) have emerged as a flexible and powerful approach for learning over graphs. Despite this success, existing GNNs are constrained by their local message-passing architecture and are provably limited in their expressive power. In this work, we propose a new GNN architecture -- the Neural Tree. The neural tree architecture does not perform message passing on the input graph, but on a tree-structured graph, called the H-tree, that is constructed from the input graph. Nodes in the H-tree correspond to subgraphs in the input graph, and they are reorganized in a hierarchical manner such that the parent of a node in the H-tree always corresponds to a larger subgraph in the input graph. We show that the neural tree architecture can approximate any smooth probability distribution function over an undirected graph. We also prove that the number of parameters needed to achieve an $\epsilon$-approximation of the distribution function is exponential in the treewidth of the input graph, but linear in its size. We prove that any continuous $\mathcal{G}$-invariant/equivariant function can be approximated by a nonlinear combination of such probability distribution functions over $\mathcal{G}$. We apply the neural tree to semi-supervised node classification in 3D scene graphs, and show that these theoretical properties translate into significant gains in prediction accuracy, over the more traditional GNN architectures. We also show the applicability of the neural tree architecture to citation networks with large treewidth, by using a graph sub-sampling technique.
false
false
false
false
false
false
true
true
false
false
false
true
false
false
false
false
false
false
235,378
2409.02628
(Implicit) Ensembles of Ensembles: Epistemic Uncertainty Collapse in Large Models
Epistemic uncertainty is crucial for safety-critical applications and out-of-distribution detection tasks. Yet, we uncover a paradoxical phenomenon in deep learning models: an epistemic uncertainty collapse as model complexity increases, challenging the assumption that larger models invariably offer better uncertainty quantification. We propose that this stems from implicit ensembling within large models. To support this hypothesis, we demonstrate epistemic uncertainty collapse empirically across various architectures, from explicit ensembles of ensembles and simple MLPs to state-of-the-art vision models, including ResNets and Vision Transformers -- for the latter, we examine implicit ensemble extraction and decompose larger models into diverse sub-models, recovering epistemic uncertainty. We provide theoretical justification for these phenomena and explore their implications for uncertainty estimation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
485,770
math/0311319
Modular and p-adic cyclic codes
This paper presents some basic theorems giving the structure of cyclic codes of length n over the ring of integers modulo p^a and over the p-adic numbers, where p is a prime not dividing n. An especially interesting example is the 2-adic cyclic code of length 7 with generator polynomial X^3 + lambda X^2 + (lambda - 1) X - 1, where lambda satisfies lambda^2 - lambda + 2 =0. This is the 2-adic generalization of both the binary Hamming code and the quaternary octacode (the latter being equivalent to the Nordstrom-Robinson code). Other examples include the 2-adic Golay code of length 24 and the 3-adic Golay code of length 12.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
540,679
2307.02730
Fine-grained Action Analysis: A Multi-modality and Multi-task Dataset of Figure Skating
Fine-grained action analysis on existing action datasets is challenged by insufficient action categories, low fine granularity, and limited modalities and tasks. In this paper, we propose a Multi-modality and Multi-task dataset of Figure Skating (MMFS), collected from the World Figure Skating Championships. MMFS, which supports action recognition and action quality assessment, captures RGB and skeleton data and collects action scores from 11,671 clips with 256 categories, including spatial and temporal labels. The key contributions of our dataset fall into three aspects. (1) Independent spatial and temporal categories are proposed for the first time to further explore fine-grained action recognition and quality assessment. (2) MMFS first introduces the skeleton modality for complex fine-grained action quality assessment. (3) Our multi-modality and multi-task dataset encourages more action analysis models. To benchmark our dataset, we adopt RGB-based and skeleton-based baseline methods for action recognition and action quality assessment.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
377,789
1606.07849
Focused Meeting Summarization via Unsupervised Relation Extraction
We present a novel unsupervised framework for focused meeting summarization that views the problem as an instance of relation extraction. We adapt an existing in-domain relation learner (Chen et al., 2011) by exploiting a set of task-specific constraints and features. We evaluate the approach on a decision summarization task and show that it outperforms unsupervised utterance-level extractive summarization baselines as well as an existing generic relation-extraction-based summarization method. Moreover, our approach produces summaries competitive with those generated by supervised methods in terms of the standard ROUGE score.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
57,792
1109.3940
Learning Discriminative Metrics via Generative Models and Kernel Learning
Metrics specifying distances between data points can be learned in a discriminative manner or from generative models. In this paper, we show how to unify generative and discriminative learning of metrics via a kernel learning framework. Specifically, we learn local metrics optimized from parametric generative models. These are then used as base kernels to construct a global kernel that minimizes a discriminative training criterion. We consider both linear and nonlinear combinations of local metric kernels. Our empirical results show that these combinations significantly improve performance on classification tasks. The proposed learning algorithm is also very efficient, achieving order of magnitude speedup in training time compared to previous discriminative baseline methods.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
12,226
2209.13853
Thinking Hallucination for Video Captioning
With the advent of rich visual representations and pre-trained language models, video captioning has seen continuous improvement over time. Despite the performance improvement, video captioning models are prone to hallucination. Hallucination refers to the generation of highly pathological descriptions that are detached from the source material. In video captioning, there are two kinds of hallucination: object and action hallucination. Instead of endeavoring to learn better representations of a video, in this work, we investigate the fundamental sources of the hallucination problem. We identify three main factors: (i) inadequate visual features extracted from pre-trained models, (ii) improper influences of source and target contexts during multi-modal fusion, and (iii) exposure bias in the training strategy. To alleviate these problems, we propose two robust solutions: (a) the introduction of auxiliary heads trained in multi-label settings on top of the extracted visual features and (b) the addition of context gates, which dynamically select the features during fusion. The standard evaluation metrics for video captioning measures similarity with ground truth captions and do not adequately capture object and action relevance. To this end, we propose a new metric, COAHA (caption object and action hallucination assessment), which assesses the degree of hallucination. Our method achieves state-of-the-art performance on the MSR-Video to Text (MSR-VTT) and the Microsoft Research Video Description Corpus (MSVD) datasets, especially by a massive margin in CIDEr score.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
320,054
2207.07693
Towards Understanding Confusion and Affective States Under Communication Failures in Voice-Based Human-Machine Interaction
We present a series of two studies conducted to understand users' affective states during voice-based human-machine interactions. Emphasis is placed on cases of communication errors or failures. In particular, we are interested in understanding "confusion" in relation to other affective states. The studies consist of two types of tasks: (1) tasks related to communication with a voice-based virtual agent: speaking to the machine and understanding what the machine says; (2) non-communication-related, problem-solving tasks where the participants solve puzzles and riddles but are asked to verbally explain the answers to the machine. We collected audio-visual data and self-reports of the participants' affective states. We report the results of the two studies and an analysis of the collected data. The first study was analyzed based on the annotator's observation, and the second study was analyzed based on the self-report.
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
308,271
2310.08259
Invisible Threats: Backdoor Attack in OCR Systems
Optical Character Recognition (OCR) is a widely used tool to extract text from scanned documents. Today, the state-of-the-art is achieved by exploiting deep neural networks. However, the cost of this performance is paid at the price of system vulnerability. For instance, in backdoor attacks, attackers compromise the training phase by inserting a backdoor in the victim's model that will be activated at testing time by specific patterns while leaving the overall model performance intact. This work proposes a backdoor attack for OCR resulting in the injection of non-readable characters from malicious input images. This simple but effective attack exposes the state-of-the-art OCR weakness, making the extracted text correct to human eyes but simultaneously unusable for the NLP application that uses OCR as a preprocessing step. Experimental results show that the attacked models successfully output non-readable characters for around 90% of the poisoned instances without harming their performance for the remaining instances.
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
399,326
2111.10974
Many Heads but One Brain: Fusion Brain -- a Competition and a Single Multimodal Multitask Architecture
Supporting the current trend in the AI community, we present the AI Journey 2021 Challenge called Fusion Brain, the first competition targeted at creating a universal architecture that can process different modalities (in this case, images, texts, and code) and solve multiple tasks for vision and language. The Fusion Brain Challenge combines the following specific tasks: Code2code Translation, Handwritten Text Recognition, Zero-shot Object Detection, and Visual Question Answering. We have created datasets for each task to test the participants' submissions on them. Moreover, we have collected and made publicly available a new handwritten dataset in both English and Russian, which consists of 94,128 pairs of images and texts. We also propose a multimodal and multitask architecture as a baseline solution, in the center of which is a frozen foundation model, and which has been trained in Fusion mode along with Single-task mode. The proposed Fusion approach proves to be competitive and more energy-efficient compared to the task-specific one.
false
false
false
false
true
false
false
false
true
false
false
true
false
false
false
false
false
false
267,510
2001.11499
Estimating and abstracting the 3D structure of bones using neural networks on X-ray (2D) images
In this paper, we present a deep-learning based method for estimating the 3D structure of a bone from a pair of 2D X-ray images. Our triplet loss-trained neural network selects the most closely matching 3D bone shape from a predefined set of shapes. Our predictions have an average root mean square (RMS) distance of 1.08 mm between the predicted and true shapes, making it more accurate than the average error achieved by eight other examined 3D bone reconstruction approaches. The prediction process that we use is fully automated and unlike many competing approaches, it does not rely on any previous knowledge about bone geometry. Additionally, our neural network can determine the identity of a bone based only on its X-ray image. It computes a low-dimensional representation ("embedding") of each 2D X-ray image and henceforth compares different X-ray images based only on their embeddings. An embedding holds enough information to uniquely identify the bone CT belonging to the input X-ray image with a 100% accuracy and can therefore serve as a kind of fingerprint for that bone. Possible applications include faster, image content-based bone database searches for forensic purposes.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
162,090
2112.07384
Identification of Biased Terms in News Articles by Comparison of Outlet-specific Word Embeddings
Slanted news coverage, also called media bias, can heavily influence how news consumers interpret and react to the news. To automatically identify biased language, we present an exploratory approach that compares the context of related words. We train two word embedding models, one on texts of left-wing, the other on right-wing news outlets. Our hypothesis is that a word's representations in both word embedding spaces are more similar for non-biased words than biased words. The underlying idea is that the context of biased words in different news outlets varies more strongly than the one of non-biased words, since the perception of a word as being biased differs depending on its context. While we do not find statistical significance to accept the hypothesis, the results show the effectiveness of the approach. For example, after a linear mapping of both word embeddings spaces, 31% of the words with the largest distances potentially induce bias. To improve the results, we find that the dataset needs to be significantly larger, and we derive further methodology as future research direction. To our knowledge, this paper presents the first in-depth look at the context of bias words measured by word embeddings.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
271,465
2407.14377
Enhancing Cloud-Native Resource Allocation with Probabilistic Forecasting Techniques in O-RAN
The need for intelligent and efficient resource provisioning for the productive management of resources in real-world scenarios is growing with the evolution of telecommunications towards the 6G era. Technologies such as Open Radio Access Network (O-RAN) can help to build interoperable solutions for the management of complex systems. Probabilistic forecasting, in contrast to deterministic single-point estimators, can offer a different approach to resource allocation by quantifying the uncertainty of the generated predictions. This paper examines the cloud-native aspects of O-RAN together with the radio App (rApp) deployment options. The integration of probabilistic forecasting techniques as a rApp in O-RAN is also emphasized, along with case studies of real-world applications. Through a comparative analysis of forecasting models using the error metric, we show the advantages of Deep Autoregressive Recurrent network (DeepAR) over other deterministic probabilistic estimators. Furthermore, the simplicity of Simple-Feed-Forward (SFF) leads to a fast runtime but does not capture the temporal dependencies of the input data. Finally, we present some aspects related to the practical applicability of cloud-native O-RAN with probabilistic forecasting.
false
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
false
true
474,762
1206.3793
A distributed classification/estimation algorithm for sensor networks
In this paper, we address the problem of simultaneous classification and estimation of hidden parameters in a sensor network with communications constraints. In particular, we consider a network of noisy sensors which measure a common scalar unknown parameter. We assume that a fraction of the nodes represent faulty sensors, whose measurements are poorly reliable. The goal for each node is to simultaneously identify its class (faulty or non-faulty) and estimate the common parameter. We propose a novel cooperative iterative algorithm which copes with the communication constraints imposed by the network and shows remarkable performance. Our main result is a rigorous proof of the convergence of the algorithm and a characterization of the limit behavior. We also show that, in the limit when the number of sensors goes to infinity, the common unknown parameter is estimated with arbitrary small error, while the classification error converges to that of the optimal centralized maximum likelihood estimator. We also show numerical results that validate the theoretical analysis and support their possible generalization. We compare our strategy with the Expectation-Maximization algorithm and we discuss trade-offs in terms of robustness, speed of convergence and implementation simplicity.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
16,595
1511.08971
Evolving Models for Meso-Scale Structures
Real world complex networks are scale free and possess meso-scale properties like core-periphery and community structure. We study evolution of the core over time in real world networks. This paper proposes evolving models for both unweighted and weighted scale free networks having local and global core-periphery as well as community structure. Network evolves using topological growth, self growth, and weight distribution function. To validate the correctness of proposed models, we use K-shell and S-shell decomposition methods. Simulation results show that the generated unweighted networks follow power law degree distribution with droop head and heavy tail. Similarly, generated weighted networks follow degree, strength, and edge-weight power law distributions. We further study other properties of complex networks, such as clustering coefficient, nearest neighbor degree, and strength degree correlation.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
49,604
2009.09918
Beyond Identity: What Information Is Stored in Biometric Face Templates?
Deeply-learned face representations enable the success of current face recognition systems. Despite the ability of these representations to encode the identity of an individual, recent works have shown that more information is stored within, such as demographics, image characteristics, and social traits. This threatens the user's privacy, since for many applications these templates are expected to be solely used for recognition purposes. Knowing the encoded information in face templates helps to develop bias-mitigating and privacy-preserving face recognition technologies. This work aims to support the development of these two branches by analysing face templates regarding 113 attributes. Experiments were conducted on two publicly available face embeddings. For evaluating the predictability of the attributes, we trained a massive attribute classifier that is additionally able to accurately state its prediction confidence. This allows us to make more sophisticated statements about the attribute predictability. The results demonstrate that up to 74 attributes can be accurately predicted from face templates. Especially non-permanent attributes, such as age, hairstyles, haircolors, beards, and various accessories, were found to be easily predictable. Since face recognition systems aim to be robust against these variations, future research might build on this work to develop more understandable privacy-preserving solutions and build robust and fair face templates.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
196,732
1709.00541
Patterns versus Characters in Subword-aware Neural Language Modeling
Words in some natural languages can have a composite structure. Elements of this structure include the root (that could also be composite), prefixes and suffixes with which various nuances and relations to other words can be expressed. Thus, in order to build a proper word representation one must take into account its internal structure. From a corpus of texts we extract a set of frequent subwords and from the latter set we select patterns, i.e. subwords which encapsulate information on character $n$-gram regularities. The selection is made using the pattern-based Conditional Random Field model with $l_1$ regularization. Further, for every word we construct a new sequence over an alphabet of patterns. The new alphabet's symbols confine a local statistical context stronger than the characters, therefore they allow better representations in ${\mathbb{R}}^n$ and are better building blocks for word representation. In the task of subword-aware language modeling, pattern-based models outperform character-based analogues by 2-20 perplexity points. Also, a recurrent neural network in which a word is represented as a sum of embeddings of its patterns is on par with a competitive and significantly more sophisticated character-based convolutional architecture.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
79,921
2005.03531
Faceted Search of Heterogeneous Geographic Information for Dynamic Map Projection
This paper proposes a faceted information exploration model that supports coarse-grained and fine-grained focusing of geographic maps by offering a graphical representation of data attributes within interactive widgets. The proposed approach enables (i) a multi-category projection of long-lasting geographic maps, based on the proposal of efficient facets for data exploration in sparse and noisy datasets, and (ii) an interactive representation of the search context based on widgets that support data visualization, faceted exploration, category-based information hiding and transparency of results at the same time. The integration of our model with a semantic representation of geographical knowledge supports the exploration of information retrieved from heterogeneous data sources, such as Public Open Data and OpenStreetMap. We evaluated our model with users in the OnToMap collaborative Web GIS. The experimental results show that, when working on geographic maps populated with multiple data categories, it outperforms simple category-based map projection and traditional faceted search tools, such as checkboxes, in both user performance and experience.
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
176,191
2208.11247
SwinFIR: Revisiting the SwinIR with Fast Fourier Convolution and Improved Training for Image Super-Resolution
Transformer-based methods have achieved impressive image restoration performance due to their capacities to model long-range dependency compared to CNN-based methods. However, advances like SwinIR adopt the window-based and local attention strategy to balance the performance and computational overhead, which restricts employing large receptive fields to capture global information and establish long dependencies in the early layers. To further improve the efficiency of capturing global information, in this work, we propose SwinFIR to extend SwinIR by replacing Fast Fourier Convolution (FFC) components, which have the image-wide receptive field. We also revisit other advanced techniques, i.e., data augmentation, pre-training, and feature ensemble to improve the effect of image reconstruction. Our feature ensemble method enables the performance of the model to be considerably enhanced without increasing the training and testing time. We applied our algorithm on multiple popular large-scale benchmarks and achieved state-of-the-art performance compared to existing methods. For example, our SwinFIR achieves the PSNR of 32.83 dB on the Manga109 dataset, which is 0.8 dB higher than the state-of-the-art SwinIR method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
314,354
2201.06409
Search and Score-based Waterfall Auction Optimization
Online advertising is a major source of income for many online companies. One common approach is to sell online advertisements via waterfall auctions, through which a publisher makes sequential price offers to ad networks. The publisher controls the order and prices of the waterfall in an attempt to maximize his revenue. In this work, we propose a methodology to learn a waterfall strategy from historical data by wisely searching in the space of possible waterfalls and selecting the one leading to the highest revenues. The contribution of this work is twofold. First, we propose a novel method to estimate the valuation distribution of each user, with respect to each ad network. Second, we utilize the valuation matrix to score our candidate waterfalls as part of a procedure that iteratively searches in local neighborhoods. Our framework guarantees that the waterfall revenue improves between iterations, ultimately converging into a local optimum. Real-world demonstrations are provided to show that the proposed method improves the total revenue of real-world waterfalls, as compared to manual expert optimization. Finally, the code and the data are available here.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
275,719
2211.07997
Security Closure of IC Layouts Against Hardware Trojans
Due to cost benefits, supply chains of integrated circuits (ICs) are largely outsourced nowadays. However, passing ICs through various third-party providers gives rise to many threats, like piracy of IC intellectual property or insertion of hardware Trojans, i.e., malicious circuit modifications. In this work, we proactively and systematically harden the physical layouts of ICs against post-design insertion of Trojans. Toward that end, we propose a multiplexer-based logic-locking scheme that is (i) devised for layout-level Trojan prevention, (ii) resilient against state-of-the-art, oracle-less machine learning attacks, and (iii) fully integrated into a tailored, yet generic, commercial-grade design flow. Our work provides in-depth security and layout analysis on a challenging benchmark suite. We show that ours can render layouts resilient, with reasonable overheads, against Trojan insertion in general and also against second-order attacks (i.e., adversaries seeking to bypass the locking defense in an oracle-less setting). We release our layout artifacts for independent verification [29] and we will release our methodology's source code.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
true
330,443
1701.08053
Data Warehouse Benchmarking with DWEB
Performance evaluation is a key issue for designers and users of Database Management Systems (DBMSs). Performance is generally assessed with software benchmarks that help, e.g., test architectural choices, compare different technologies or tune a system. In the particular context of data warehousing and On-Line Analytical Processing (OLAP), although the Transaction Processing Performance Council (TPC) aims at issuing standard decision-support benchmarks, few benchmarks do actually exist. We present in this chapter the Data Warehouse Engineering Benchmark (DWEB), which allows generating various ad-hoc synthetic data warehouses and workloads. DWEB is fully parameterized to fulfill various data warehouse design needs. However, two levels of parameterization keep it relatively easy to tune. We also expand on our previous work on DWEB by presenting its new Extract, Transform, and Load (ETL) feature as well as its new execution protocol. A Java implementation of DWEB is freely available on-line, which can be interfaced with most existing relational DBMSs. To the best of our knowledge, DWEB is the only easily available, up-to-date benchmark for data warehouses.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
67,394
2402.02738
Improving Robustness of LiDAR-Camera Fusion Model against Weather Corruption from Fusion Strategy Perspective
In recent years, LiDAR-camera fusion models have markedly advanced 3D object detection tasks in autonomous driving. However, their robustness against common weather corruption such as fog, rain, snow, and sunlight in the intricate physical world remains underexplored. In this paper, we evaluate the robustness of fusion models from the perspective of fusion strategies on the corrupted dataset. Based on the evaluation, we further propose a concise yet practical fusion strategy to enhance the robustness of the fusion models, namely flexibly weighted fusing features from LiDAR and camera sources to adapt to varying weather scenarios. Experiments conducted on four types of fusion models, each with two distinct lightweight implementations, confirm the broad applicability and effectiveness of the approach.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
426,712
1608.03235
Gaze2Segment: A Pilot Study for Integrating Eye-Tracking Technology into Medical Image Segmentation
This study introduced a novel system, called Gaze2Segment, integrating biological and computer vision techniques to support radiologists' reading experience with an automatic image segmentation task. During diagnostic assessment of lung CT scans, the radiologists' gaze information was used to create a visual attention map. This map was then combined with a computer-derived saliency map, extracted from the gray-scale CT images. The visual attention map was used as an input for indicating roughly the location of an object of interest. With computer-derived saliency information, on the other hand, we aimed at finding foreground and background cues for the object of interest. At the final step, these cues were used to initiate a seed-based delineation process. Segmentation accuracy of the proposed Gaze2Segment was found to be 86% with dice similarity coefficient and 1.45 mm with Hausdorff distance. To the best of our knowledge, Gaze2Segment is the first true integration of eye-tracking technology into a medical image segmentation task without the need for any further user-interaction.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
59,653
2006.16202
Partitioned Least Squares
In this paper we propose a variant of the linear least squares model allowing practitioners to partition the input features into groups of variables that they require to contribute similarly to the final result. The output allows practitioners to assess the importance of each group and of each variable in the group. We formally show that the new formulation is not convex and provide two alternative methods to deal with the problem: one non-exact method based on an alternating least squares approach; and one exact method based on a reformulation of the problem using an exponential number of sub-problems whose minimum is guaranteed to be the optimal solution. We formally show the correctness of the exact method and also compare the two solutions showing that the exact solution provides better results in a fraction of the time required by the alternating least squares solution (assuming that the number of partitions is small). For the sake of completeness, we also provide an alternative branch and bound algorithm that can be used in place of the exact method when the number of partitions is too large, and a proof of NP-completeness of the optimization problem introduced in this paper.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
184,742
2501.03358
Data integrity vs. inference accuracy in large AIS datasets
Automatic Ship Identification Systems (AIS) play a key role in monitoring maritime traffic, providing the data necessary for analysis and decision-making. The integrity of this data is fundamental to the correctness of inference and decision-making in the context of maritime safety, traffic management and environmental protection. This paper analyzes the impact of data integrity in large AIS datasets on classification accuracy. It also presents error detection and correction methods and data verification techniques that can improve the reliability of AIS systems. The results show that improving the integrity of AIS data significantly improves the quality of inference, which has a direct impact on operational efficiency and safety at sea.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
522,846
2501.06802
Unifying Two Types of Scaling Laws from the Perspective of Conditional Kolmogorov Complexity
In 2020, OpenAI proposed the first type of Scaling Laws, describing the relationships between model loss and the scale of parameters, data, and training computation. In 2024, OpenAI proposed the second type of Scaling Laws, describing the relationship between model inference performance and inference computation. In this paper, we analyze LLMs training and inference processes from the perspective of lossless compression using conditional Kolmogorov complexity, and unify these two types of Scaling Laws. We find that both types of Scaling Laws improve approximation of conditional Kolmogorov complexity by increasing execution steps of Turing machine. The first type of Scaling Laws increases execution steps by increasing number of model parameters. The second type of Scaling Laws increases execution steps by increasing the number of intermediate tokens.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
524,132
1712.09469
Closed-Form Coverage Probability for Downlink Poisson Network with Double Shadowed Fading
Performances of cellular networks over {\kappa}-{\mu} shadowed fading with long-term shadowing has been studied in the existing literature. However, the impact of {\kappa}-{\mu} shadowed fading with instantaneous shadowing on performances of cellular networks is unknown. Therefore, this letter analyzes the downlink coverage probability of a Poisson network with double shadowed fading which is composed of a large-scale fading of lognormal distribution and {\kappa}-{\mu} shadowed fading with integer fading parameters. The closest base station association rule without shadowing is considered. For analytical tractability, the double shadowed fading is approximated as a weighted sum of {\kappa}-{\mu} shadowed distributions based on the Gaussian-Hermit quadrature. As a main theoretical result, a closed-form expression for the downlink coverage probability of a Poisson network under double shadowed fading for the desired signal and arbitrary fading for the interfering signals is successfully derived. Numerical simulations reveal that the double shadowed fading provides a pessimistic coverage on a Poisson network compared with the long-term shadowing which is incorporated into cell selection.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
87,359
2010.12831
Unsupervised Vision-and-Language Pre-training Without Parallel Images and Captions
Pre-trained contextual vision-and-language (V&L) models have achieved impressive performance on various benchmarks. However, existing models require a large amount of parallel image-caption data for pre-training. Such data are costly to collect and require cumbersome curation. Inspired by unsupervised machine translation, we investigate if a strong V&L representation model can be learned through unsupervised pre-training without image-caption corpora. In particular, we propose to conduct ``mask-and-predict'' pre-training on text-only and image-only corpora and introduce the object tags detected by an object recognition model as anchor points to bridge two modalities. We find that such a simple approach achieves performance close to a model pre-trained with aligned data, on four English V&L benchmarks. Our work challenges the widely held notion that aligned data is necessary for V&L pre-training, while significantly reducing the amount of supervision needed for V&L models.
false
false
false
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
202,879
2309.00314
ARFA: An Asymmetric Receptive Field Autoencoder Model for Spatiotemporal Prediction
Spatiotemporal prediction aims to generate future sequences by paradigms learned from historical contexts. It is essential in numerous domains, such as traffic flow prediction and weather forecasting. Recently, research in this field has been predominantly driven by deep neural networks based on autoencoder architectures. However, existing methods commonly adopt autoencoder architectures with identical receptive field sizes. To address this issue, we propose an Asymmetric Receptive Field Autoencoder (ARFA) model, which introduces corresponding sizes of receptive field modules tailored to the distinct functionalities of the encoder and decoder. In the encoder, we present a large kernel module for global spatiotemporal feature extraction. In the decoder, we develop a small kernel module for local spatiotemporal information reconstruction. Experimental results demonstrate that ARFA consistently achieves state-of-the-art performance on popular datasets. Additionally, we construct the RainBench, a large-scale radar echo dataset for precipitation prediction, to address the scarcity of meteorological data in the domain.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
389,266
2301.05157
Statistical Learning with Sublinear Regret of Propagator Models
We consider a class of learning problems in which an agent liquidates a risky asset while creating both transient price impact driven by an unknown convolution propagator and linear temporary price impact with an unknown parameter. We characterize the trader's performance as maximization of a revenue-risk functional, where the trader also exploits available information on a price predicting signal. We present a trading algorithm that alternates between exploration and exploitation phases and achieves sublinear regrets with high probability. For the exploration phase we propose a novel approach for non-parametric estimation of the price impact kernel by observing only the visible price process and derive sharp bounds on the convergence rate, which are characterised by the singularity of the propagator. These kernel estimation methods extend existing methods from the area of Tikhonov regularisation for inverse problems and are of independent interest. The bound on the regret in the exploitation phase is obtained by deriving stability results for the optimizer and value function of the associated class of infinite-dimensional stochastic control problems. As a complementary result we propose a regression-based algorithm to estimate the conditional expectation of non-Markovian signals and derive its convergence rate.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
340,273
2101.02538
MRNet: a Multi-scale Residual Network for EEG-based Sleep Staging
Sleep staging based on electroencephalogram (EEG) plays an important role in the clinical diagnosis and treatment of sleep disorders. In order to emancipate human experts from heavy labeling work, deep neural networks have been employed to formulate automated sleep staging systems recently. However, EEG signals lose considerable detailed information in network propagation, which affects the representation of deep features. To address this problem, we propose a new framework, called MRNet, for data-driven sleep staging by integrating a multi-scale feature fusion model and a Markov-based sequential correction algorithm. The backbone of MRNet is a residual block-based network, which performs as a feature extractor. Then the fusion model constructs a feature pyramid by concatenating the outputs from the different depths of the backbone, which can help the network better comprehend the signals in different scales. The Markov-based sequential correction algorithm is designed to reduce the output jitters generated by the classifier. The algorithm depends on a prior stage distribution associated with the sleep stage transition rule and the Markov chain. Experiment results demonstrate the competitive performance of our proposed approach on both accuracy and F1 score (e.g., 85.14% Acc and 78.91% F1 score on Sleep-EDFx, and 87.59% Acc and 79.62% F1 score on Sleep-EDF).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
214,664
2502.06029
DiTASK: Multi-Task Fine-Tuning with Diffeomorphic Transformations
Pre-trained Vision Transformers now serve as powerful tools for computer vision. Yet, efficiently adapting them for multiple tasks remains a challenge that arises from the need to modify the rich hidden representations encoded by the learned weight matrices, without inducing interference between tasks. Current parameter-efficient methods like LoRA, which apply low-rank updates, force tasks to compete within constrained subspaces, ultimately degrading performance. We introduce DiTASK, a novel Diffeomorphic Multi-Task Fine-Tuning approach that maintains pre-trained representations by preserving weight matrix singular vectors, while enabling task-specific adaptations through neural diffeomorphic transformations of the singular values. By following this approach, DiTASK enables both shared and task-specific feature modulations with minimal added parameters. Our theoretical analysis shows that DiTASK achieves full-rank updates during optimization, preserving the geometric structure of pre-trained features, and establishing a new paradigm for efficient multi-task learning (MTL). Our experiments on PASCAL MTL and NYUD show that DiTASK achieves state-of-the-art performance across four dense prediction tasks, using 75% fewer parameters than existing methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
531,889
2409.06018
Pioneering Precision in Lumbar Spine MRI Segmentation with Advanced Deep Learning and Data Enhancement
This study presents an advanced approach to lumbar spine segmentation using deep learning techniques, focusing on addressing key challenges such as class imbalance and data preprocessing. Magnetic resonance imaging (MRI) scans of patients with low back pain are meticulously preprocessed to accurately represent three critical classes: vertebrae, spinal canal, and intervertebral discs (IVDs). By rectifying class inconsistencies in the data preprocessing stage, the fidelity of the training data is ensured. The modified U-Net model incorporates innovative architectural enhancements, including an upsample block with leaky Rectified Linear Units (ReLU) and Glorot uniform initializer, to mitigate common issues such as the dying ReLU problem and improve stability during training. Introducing a custom combined loss function effectively tackles class imbalance, significantly improving segmentation accuracy. Evaluation using a comprehensive suite of metrics showcases the superior performance of this approach, outperforming existing methods and advancing the current techniques in lumbar spine segmentation. These findings hold significant advancements for enhanced lumbar spine MRI and segmentation diagnostic accuracy.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
486,973
2102.00390
AACP: Model Compression by Accurate and Automatic Channel Pruning
Channel pruning is formulated as a neural architecture search (NAS) problem recently. However, existing NAS-based methods are challenged by huge computational cost and inflexibility of applications. How to deal with multiple sparsity constraints simultaneously and speed up NAS-based channel pruning are still open challenges. In this paper, we propose a novel Accurate and Automatic Channel Pruning (AACP) method to address these problems. Firstly, AACP represents the structure of a model as a structure vector and introduces a pruning step vector to control the compressing granularity of each layer. Secondly, AACP utilizes Pruned Structure Accuracy Estimator (PSAE) to speed up the performance estimation process. Thirdly, AACP proposes Improved Differential Evolution (IDE) algorithm to search the optimal structure vector effectively. Because of IDE, AACP can deal with FLOPs constraint and model size constraint simultaneously and efficiently. Our method can be easily applied to various tasks and achieve state of the art performance. On CIFAR10, our method reduces $65\%$ FLOPs of ResNet110 with an improvement of $0.26\%$ top-1 accuracy. On ImageNet, we reduce $42\%$ FLOPs of ResNet50 with a small loss of $0.18\%$ top-1 accuracy and reduce $30\%$ FLOPs of MobileNetV2 with a small loss of $0.7\%$ top-1 accuracy. The source code will be released after publication.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
217,764
2210.04807
Efficient NTK using Dimensionality Reduction
Recently, the neural tangent kernel (NTK) has been used to explain the dynamics of learning parameters of neural networks in the large-width limit. Quantitative analyses of the NTK give rise to network widths that are often impractical and incur high costs in time and energy in both training and deployment. Using a matrix factorization technique, we show how to obtain similar guarantees to those obtained by a prior analysis while reducing training and inference resource costs. The importance of our result further increases when the data dimension of the input points is of the same order as the number of input points. More generally, our work suggests how to analyze large-width networks in which dense linear layers are replaced with a low-complexity factorization, thus reducing the heavy dependence on the large width.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
322,584
2212.02073
Minimum Latency Deep Online Video Stabilization
We present a novel camera path optimization framework for the task of online video stabilization. Typically, a stabilization pipeline consists of three steps: motion estimation, path smoothing, and novel view rendering. Most previous methods concentrate on motion estimation, proposing various global or local motion models. In contrast, path optimization receives relatively little attention, especially in the important online setting, where no future frames are available. In this work, we adopt recent off-the-shelf high-quality deep motion models for motion estimation to recover the camera trajectory and focus on the latter two steps. Our network takes a short 2D camera path in a sliding window as input and outputs the stabilizing warp field of the last frame in the window, which warps the coming frame to its stabilized position. A hybrid loss is well-defined to constrain the spatial and temporal consistency. In addition, we build a motion dataset that contains stable and unstable motion pairs for the training. Extensive experiments demonstrate that our approach significantly outperforms state-of-the-art online methods both qualitatively and quantitatively and achieves comparable performance to offline methods. Our code and dataset are available at https://github.com/liuzhen03/NNDVS
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
334,674
1707.07187
Hipsters on Networks: How a Small Group of Individuals Can Lead to an Anti-Establishment Majority
The spread of opinions, memes, diseases, and "alternative facts" in a population depends both on the details of the spreading process and on the structure of the social and communication networks on which they spread. In this paper, we explore how \textit{anti-establishment} nodes (e.g., \textit{hipsters}) influence the spreading dynamics of two competing products. We consider a model in which spreading follows a deterministic rule for updating node states (which describe which product has been adopted) in which an adjustable fraction $p_{\rm Hip}$ of the nodes in a network are hipsters, who choose to adopt the product that they believe is the less popular of the two. The remaining nodes are conformists, who choose which product to adopt by considering which products their immediate neighbors have adopted. We simulate our model on both synthetic and real networks, and we show that the hipsters have a major effect on the final fraction of people who adopt each product: even when only one of the two products exists at the beginning of the simulations, a very small fraction of hipsters in a network can still cause the other product to eventually become the more popular one. To account for this behavior, we construct an approximation for the steady-state adoption fraction on $k$-regular trees in the limit of few hipsters. Additionally, our simulations demonstrate that a time delay $\tau$ in the knowledge of the product distribution in a population, as compared to immediate knowledge of product adoption among nearest neighbors, can have a large effect on the final distribution of product adoptions. Our simple model and analysis may help shed light on the road to success for anti-establishment choices in elections, as such success can arise rather generically in our model from a small number of anti-establishment individuals and ordinary processes of social influence on normal individuals.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
77,566
2409.13421
State space models, emergence, and ergodicity: How many parameters are needed for stable predictions?
How many parameters are required for a model to execute a given task? It has been argued that large language models, pre-trained via self-supervised learning, exhibit emergent capabilities such as multi-step reasoning as their number of parameters reaches a critical scale. In the present work, we explore whether this phenomenon can analogously be replicated in a simple theoretical model. We show that the problem of learning linear dynamical systems -- a simple instance of self-supervised learning -- exhibits a corresponding phase transition. Namely, for every non-ergodic linear system there exists a critical threshold such that a learner using fewer parameters than said threshold cannot achieve bounded error for large sequence lengths. Put differently, in our model we find that tasks exhibiting substantial long-range correlation require a certain critical number of parameters -- a phenomenon akin to emergence. We also investigate the role of the learner's parametrization and consider a simple version of a linear dynamical system with hidden state -- an imperfectly observed random walk in $\mathbb{R}$. For this situation, we show that there exists no learner using a linear filter which can successfully learn the random walk unless the filter length exceeds a certain threshold depending on the effective memory length and horizon of the problem.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
489,985
2303.07275
A Survey of Graph Prompting Methods: Techniques, Applications, and Challenges
The recent "pre-train, prompt, predict" training paradigm has gained popularity as a way to learn generalizable models with limited labeled data. The approach involves using a pre-trained model and a prompting function that applies a template to input samples, adding indicative context and reformulating target tasks as the pre-training task. However, the design of prompts can be a challenging and time-consuming process in complex tasks. This limitation can be addressed by using graph data, as graphs serve as structured knowledge repositories by explicitly modeling the interaction between entities. In this survey, we review prompting methods from the graph perspective, where prompting functions are augmented with graph knowledge. In particular, we introduce the basic concepts of graph prompt learning, organize the existing work on designing graph prompting functions, and describe their applications and future challenges. This survey will bridge the gap between graphs and prompt design to facilitate future methodology development.
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
false
false
351,196