Dataset schema
  id                  string (length 9-16)
  title               string (length 4-278)
  abstract            string (length 3-4.08k)
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT,
  cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                      bool (2 classes each; multi-label category flags)
  __index_level_0__   int64 (0-541k)

Each record below lists the id, title, and abstract, followed by its category labels and row index.
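For readers who want to work with rows shaped like the ones below, here is a minimal sketch of collapsing the 18 one-hot boolean columns into a list of category names. The file name arxiv_multilabel.parquet and the use of pandas are assumptions for illustration, not the dataset's actual distribution format.

```python
# Hypothetical sketch: turn the 18 one-hot boolean label columns into a
# list of category names per row. The parquet filename is an assumption.
import pandas as pd

LABEL_COLS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

df = pd.read_parquet("arxiv_multilabel.parquet")  # assumed filename
df["labels"] = df[LABEL_COLS].apply(
    lambda row: [c for c in LABEL_COLS if row[c]], axis=1
)
print(df[["id", "title", "labels"]].head())
```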
2403.15800
MRC-based Nested Medical NER with Co-prediction and Adaptive Pre-training
In medical information extraction, medical Named Entity Recognition (NER) is indispensable, playing a crucial role in developing medical knowledge graphs, enhancing medical question-answering systems, and analyzing electronic medical records. The challenge in medical NER arises from the complex nested structures and sophisticated medical terminologies, distinguishing it from its counterparts in traditional domains. In response to these complexities, we propose a medical NER model based on Machine Reading Comprehension (MRC), which uses a task-adaptive pre-training strategy to improve the model's capability in the medical field. Meanwhile, our model introduces multiple word-pair embeddings and multi-granularity dilated convolution to enhance the model's representation ability and uses a combined predictor of Biaffine and MLP to improve the model's recognition performance. Experimental evaluations conducted on the CMeEE, a benchmark for Chinese nested medical NER, demonstrate that our proposed model outperforms the compared state-of-the-art (SOTA) models.
labels: cs.CL · __index_level_0__: 440,752
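The abstract above mentions a combined Biaffine-and-MLP span predictor. Below is a minimal PyTorch sketch of that general idea, where every (start, end) token pair gets a label score from a biaffine scorer and from an MLP, and the two score tables are averaged. The dimensions and the averaging combiner are assumptions; the paper's full architecture (word-pair embeddings, dilated convolutions) is not reproduced here.

```python
# Sketch of a Biaffine + MLP co-predictor over token-pair (span) scores.
import torch
import torch.nn as nn

class CoPredictor(nn.Module):
    def __init__(self, hidden: int, n_labels: int):
        super().__init__()
        # Biaffine: bilinear form between start and end representations.
        self.bilinear = nn.Bilinear(hidden, hidden, n_labels)
        # MLP over the concatenated start/end representations.
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.GELU(), nn.Linear(hidden, n_labels)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        B, T, H = h.shape
        start = h.unsqueeze(2).expand(B, T, T, H)  # start[b, i, j] = h[b, i]
        end = h.unsqueeze(1).expand(B, T, T, H)    # end[b, i, j] = h[b, j]
        biaffine = self.bilinear(start.reshape(-1, H), end.reshape(-1, H))
        mlp = self.mlp(torch.cat([start, end], dim=-1).reshape(-1, 2 * H))
        # Assumed combiner: simple average of the two predictors' scores.
        return ((biaffine + mlp) / 2).view(B, T, T, -1)  # [B, T, T, n_labels]

scores = CoPredictor(hidden=64, n_labels=10)(torch.randn(2, 8, 64))
print(scores.shape)  # torch.Size([2, 8, 8, 10])
```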
2302.14704
Joint Spectrum and Power Allocation for V2X Communications with Imperfect CSI
In Vehicle-to-Everything (V2X) communication, the high mobility of vehicles generates a Doppler shift that leads to channel uncertainty; finite channel feedback, channel state information (CSI) loss, and latency add further uncertainty. With this concern, we formulate a joint spectrum and power allocation problem for V2X communication with imperfect CSI. Specifically, the sum capacity of cellular user equipments (CUEs) is maximized subject to the minimum Signal-to-Interference-and-Noise Ratio (SINR) requirements of CUEs and the outage probability constraints of vehicular user equipments (VUEs). Two different robust resource allocation approaches are then designed to solve the problem. The first is a Bernstein Approximation-based Robust Resource Allocation approach: Bernstein approximations are employed to convert the chance constraint into a calculable constraint, and a bisection search method is proposed to obtain the optimal allocation solution with low complexity. To further reduce the computational complexity, a Self-learning Robust Resource Allocation approach, which combines a learning method and an analytical mapping method, is proposed as the second approach. The learning method is devised to learn the uncertainty set, which transforms the chance constraint into calculable constraints, and the analytical mapping method is proposed to obtain closed-form solutions of the resource allocation problem. Finally, simulation results show that the proposed approaches effectively improve the capacity of all CUEs whilst ensuring the reliability of the channel.
labels: cs.IT · __index_level_0__: 348,404
2002.00336
3D Object Detection on Point Clouds using Local Ground-aware and Adaptive Representation of scenes' surface
A novel, adaptive, ground-aware, and cost-effective 3D object detection pipeline is proposed. The ground surface representation introduced in this paper, in comparison to its uni-planar counterparts (methods that model the surface of a whole 3D scene using a single plane), is far more accurate while being ~10x faster. The novelty of the ground representation lies both in the way in which the ground surface of the scene is represented in Lidar perception problems, as well as in the (cost-efficient) way in which it is computed. Furthermore, the proposed object detection pipeline builds on traditional two-stage object detection models by incorporating the ability to dynamically reason about the surface of the scene, ultimately achieving new state-of-the-art 3D object detection performance among two-stage Lidar object detection pipelines.
labels: cs.LG, cs.RO, cs.CV · __index_level_0__: 162,328
2301.08590
Improving Sketch Colorization using Adversarial Segmentation Consistency
We propose a new method for producing color images from sketches. Current solutions in sketch colorization either necessitate additional user instruction or are restricted to the "paired" translation strategy. We leverage semantic image segmentation from a general-purpose panoptic segmentation network to generate an additional adversarial loss function. The proposed loss function is compatible with any GAN model. Our method is not restricted to datasets with segmentation labels and can be applied to unpaired translation tasks as well. Using qualitative, and quantitative analysis, and based on a user study, we demonstrate the efficacy of our method on four distinct image datasets. On the FID metric, our model improves the baseline by up to 35 points. Our code, pretrained models, scripts to produce newly introduced datasets and corresponding sketch images are available at https://github.com/giddyyupp/AdvSegLoss.
labels: cs.CV · __index_level_0__: 341,240
1601.02216
Physical Layer Security in Three-Tier Wireless Sensor Networks: A Stochastic Geometry Approach
This paper develops a tractable framework for exploiting the potential benefits of physical layer security in three-tier wireless sensor networks using stochastic geometry. In such networks, the sensing data from the remote sensors are collected by sinks with the help of access points, and the external eavesdroppers intercept the data transmissions. We focus on secure transmission in two scenarios: i) the active sensors transmit their sensing data to the access points, and ii) the active access points forward the data to the sinks. We derive new compact expressions for the average secrecy rate in these two scenarios. We also derive a new compact expression for the overall average secrecy rate. Numerical results corroborate our analysis and show that multiple antennas at the access points can enhance the security of three-tier wireless sensor networks. Our results show that increasing the number of access points decreases the average secrecy rate between the access point and its associated sink. However, we find that increasing the number of access points first increases the overall average secrecy rate, up to a critical value beyond which the overall average secrecy rate then decreases. When increasing the number of active sensors, both the average secrecy rate between the sensor and its associated access point and the overall average secrecy rate decrease. In contrast, increasing the number of sinks improves both the average secrecy rate between the access point and its associated sink, as well as the overall average secrecy rate.
labels: cs.IT · __index_level_0__: 50,811
2404.11581
LLMTune: Accelerate Database Knob Tuning with Large Language Models
Database knob tuning is a critical challenge in the database community, aiming to optimize knob values to enhance database performance for specific workloads. DBMS often feature hundreds of tunable knobs, posing a significant challenge for DBAs to recommend optimal configurations. Consequently, many machine learning-based tuning methods have been developed to automate this process. Despite the introduction of various optimizers, practical applications have unveiled a new problem: they typically require numerous workload runs to achieve satisfactory performance, a process that is both time-consuming and resource-intensive. This inefficiency largely stems from the optimal configuration often being substantially different from the default setting, necessitating multiple iterations during tuning. Recognizing this, we argue that an effective starting point could significantly reduce redundant exploration in less efficient areas, thereby potentially speeding up the tuning process for the optimizers. Based on this assumption, we introduce LLMTune, a large language model-based configuration generator designed to produce an initial, high-quality configuration for new workloads. These generated configurations can then serve as starting points for various base optimizers, accelerating their tuning processes. To obtain training data for LLMTune's supervised fine-tuning, we have devised a new automatic data generation framework capable of efficiently creating a large number of <workload, configuration> pairs. We have conducted thorough experiments to evaluate LLMTune's effectiveness with different workloads, such as TPC-H and JOB. In comparison to leading methods, LLMTune demonstrates a quicker ability to identify superior configurations. For instance, with the challenging TPC-H workload, our LLMTune achieves a significant 15.6x speed-up ratio in finding the best-performing configurations.
labels: cs.AI, cs.DB · __index_level_0__: 447,546
2101.03279
Investigating the Effect of Sensor Modalities in Multi-Sensor Detection-Prediction Models
Detection of surrounding objects and their motion prediction are critical components of a self-driving system. Recently proposed models that jointly address these tasks rely on a number of sensors to achieve state-of-the-art performance. However, this increases system complexity and may result in a brittle model that overfits to any single sensor modality while ignoring others, leading to reduced generalization. We focus on this important problem and analyze the contribution of sensor modalities towards the model performance. In addition, we investigate the use of sensor dropout to mitigate the above-mentioned issues, leading to a more robust, better-performing model on real-world driving data.
labels: cs.LG, cs.RO, cs.CV · __index_level_0__: 214,877
2410.23128
Leader-Follower 3D Formation for Underwater Robots
The schooling behavior of fish is hypothesized to confer many survival benefits, including foraging success, safety from predators, and energy savings through hydrodynamic interactions when swimming in formation. Underwater robot collectives may be able to achieve similar benefits in future applications, e.g. using formation control to achieve efficient spatial sampling for environmental monitoring. Although many theoretical algorithms exist for multi-robot formation control, they have not been tested in the underwater domain due to the fundamental challenges in underwater communication. Here we introduce a leader-follower strategy for underwater formation control that allows us to realize complex 3D formations, using purely vision-based perception and a reactive control algorithm that is low computation. We use a physical platform, BlueSwarm, to demonstrate for the first time an experimental realization of inline, side-by-side, and staggered swimming 3D formations. More complex formations are studied in a physics-based simulator, providing new insights into the convergence and stability of formations given underwater inertial/drag conditions. Our findings lay the groundwork for future applications of underwater robot swarms in aquatic environments with minimal communication.
labels: cs.RO · __index_level_0__: 503,911
2202.09813
Real-time Emotion Appraisal with Circumplex Model for Human-Robot Interaction
Emotions are the intrinsic or extrinsic representations of our experiences. The importance of emotions during a human-human interaction is immense as it formulates the basis of our interaction framework. There are several approaches in psychology to evaluate emotional states in humans based on the perceived stimuli. However, the topic has been less explored as far as human-robot interaction is concerned. This paper uses an appropriate emotion appraisal mechanism from psychology, generating an emotional state in a humanoid robot on-the-fly during human-robot interaction. Since the exhibition of only six basic emotions is not sufficient to cater to diverse situations, the use of the Circumplex Model in this work has allowed the life-sized robot called ROBIN to experience 28 emotional states in different interaction scenarios. Realistic robot behaviour has been generated based on the proposed appraisal system in various interaction scenarios.
labels: cs.HC, cs.RO · __index_level_0__: 281,327
1102.1480
Joint Decoding of LDPC Codes and Finite-State Channels via Linear-Programming
This paper considers the joint-decoding (JD) problem for finite-state channels (FSCs) and low-density parity-check (LDPC) codes. In the first part, the linear-programming (LP) decoder for binary linear codes is extended to JD of binary-input FSCs. In particular, we provide a rigorous definition of LP joint-decoding pseudo-codewords (JD-PCWs) that enables evaluation of the pairwise error probability between codewords and JD-PCWs in AWGN. This leads naturally to a provable upper bound on decoder failure probability. If the channel is a finite-state intersymbol interference channel, then the joint LP decoder also has the maximum-likelihood (ML) certificate property and all integer-valued solutions are codewords. In this case, the performance loss relative to ML decoding can be explained completely by fractional-valued JD-PCWs. After deriving these results, we discovered some elements were equivalent to earlier work by Flanagan on LP receivers. In the second part, we develop an efficient iterative solver for the joint LP decoder discussed in the first part. In particular, we extend the approach of iterative approximate LP decoding, proposed by Vontobel and Koetter and analyzed by Burshtein, to this problem. By taking advantage of the dual-domain structure of the JD-LP, we obtain a convergent iterative algorithm for joint LP decoding whose structure is similar to BCJR-based turbo equalization (TE). The result is a joint iterative decoder whose per-iteration complexity is similar to that of TE but whose performance is similar to that of joint LP decoding. The main advantage of this decoder is that it appears to provide the predictability of joint LP decoding and superior performance with the computational complexity of TE. One expected application is coding for magnetic storage where the required block-error rate is extremely low and system performance is difficult to verify by simulation.
labels: cs.IT · __index_level_0__: 9,071
2404.07292
Solving Masked Jigsaw Puzzles with Diffusion Vision Transformers
Solving image and video jigsaw puzzles poses the challenging task of rearranging image fragments or video frames from unordered sequences to restore meaningful images and video sequences. Existing approaches often hinge on discriminative models tasked with predicting either the absolute positions of puzzle elements or the permutation actions applied to the original data. Unfortunately, these methods face limitations in effectively solving puzzles with a large number of elements. In this paper, we propose JPDVT, an innovative approach that harnesses diffusion transformers to address this challenge. Specifically, we generate positional information for image patches or video frames, conditioned on their underlying visual content. This information is then employed to accurately assemble the puzzle pieces in their correct positions, even in scenarios involving missing pieces. Our method achieves state-of-the-art performance on several datasets.
labels: cs.CV · __index_level_0__: 445,786
2403.17599
Coimagining the Future of Voice Assistants with Cultural Sensitivity
Voice assistants (VAs) are becoming a feature of our everyday life. Yet, the user experience (UX) is often limited, leading to underuse, disengagement, and abandonment. Co-designing interactions for VAs with potential end-users can be useful. Crowdsourcing this process online and anonymously may add value. However, most work has been done in the English-speaking West on dialogue data sets. We must be sensitive to cultural differences in language, social interactions, and attitudes towards technology. Our aims were to explore the value of co-designing VAs in the non-Western context of Japan and demonstrate the necessity of cultural sensitivity. We conducted an online elicitation study (N = 135) where Americans (n = 64) and Japanese people (n = 71) imagined dialogues (N = 282) and activities (N = 73) with future VAs. We discuss the implications for coimagining interactions with future VAs, offer design guidelines for the Japanese and English-speaking US contexts, and suggest opportunities for cultural plurality in VA design and scholarship.
labels: cs.HC, cs.CL, cs.CY · __index_level_0__: 441,533
2211.14928
Class-based Quantization for Neural Networks
In deep neural networks (DNNs), there are a huge number of weights and multiply-and-accumulate (MAC) operations. Accordingly, it is challenging to apply DNNs on resource-constrained platforms, e.g., mobile phones. Quantization is a method to reduce the size and the computational complexity of DNNs. Existing quantization methods either require hardware overhead to achieve a non-uniform quantization or focus on model-wise and layer-wise uniform quantization, which are not as fine-grained as filter-wise quantization. In this paper, we propose a class-based quantization method to determine the minimum number of quantization bits for each filter or neuron in DNNs individually. In the proposed method, the importance score of each filter or neuron with respect to the number of classes in the dataset is first evaluated. The larger the score is, the more important the filter or neuron is and thus the larger the number of quantization bits should be. Afterwards, a search algorithm is adopted to exploit the different importance of filters and neurons to determine the number of quantization bits of each filter or neuron. Experimental results demonstrate that the proposed method can maintain the inference accuracy with low bit-width quantization. Given the same number of quantization bits, the proposed method can also achieve a better inference accuracy than the existing methods.
labels: cs.LG · __index_level_0__: 333,048
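The class-based quantization abstract above describes scoring each filter's importance and giving more bits to more important filters. Here is an illustrative sketch of that bit-assignment step, assuming quantile bucketing in place of the paper's search algorithm and random stand-in importance scores.

```python
# Illustrative sketch (not the paper's exact algorithm): assign more
# quantization bits to filters with higher importance scores.
import numpy as np

def assign_bits(importance: np.ndarray, bit_choices=(2, 4, 6, 8)) -> np.ndarray:
    # Rank filters by importance and split them into as many quantile
    # buckets as there are candidate bit-widths.
    order = np.argsort(importance)
    bits = np.empty(len(importance), dtype=int)
    for bucket, chunk in enumerate(np.array_split(order, len(bit_choices))):
        bits[chunk] = bit_choices[bucket]  # least important -> fewest bits
    return bits

scores = np.abs(np.random.randn(16))  # stand-in importance scores
print(assign_bits(scores))
```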
2307.06362
Spectral-Bias and Kernel-Task Alignment in Physically Informed Neural Networks
Physically informed neural networks (PINNs) are a promising emerging method for solving differential equations. As in many other deep learning approaches, the choice of PINN design and training protocol requires careful craftsmanship. Here, we suggest a comprehensive theoretical framework that sheds light on this important problem. Leveraging an equivalence between infinitely over-parameterized neural networks and Gaussian process regression (GPR), we derive an integro-differential equation that governs PINN prediction in the large data-set limit -- the neurally-informed equation. This equation augments the original one by a kernel term reflecting architecture choices and allows quantifying implicit bias induced by the network via a spectral decomposition of the source term in the original differential equation.
labels: cs.LG · __index_level_0__: 379,046
2311.10809
Extracting periodontitis diagnosis in clinical notes with RoBERTa and regular expression
This study aimed to utilize text processing and natural language processing (NLP) models to mine clinical notes for the diagnosis of periodontitis and to evaluate the performance of a named entity recognition (NER) model with different regular expression (RE) methods. Two complexity levels of RE methods were used to extract and generate the training data. The SpaCy package and RoBERTa transformer models were used to build the NER model, whose performance was evaluated against manually labeled gold standards. The comparison of the RE methods with the gold standard showed that as the complexity of the RE algorithms increased, the F1 score increased from 0.3-0.4 to around 0.9. The NER models demonstrated excellent predictions, with the simple RE method scoring 0.84-0.92 on the evaluation metrics, and the advanced and combined RE method scoring 0.95-0.99. This study provides an example of the benefit of combining NER methods and NLP models to extract target information from free text into structured data, recovering missing diagnoses from unstructured notes.
labels: cs.AI · __index_level_0__: 408,679
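The periodontitis study above builds NER training data from regular expressions of two complexity levels. The toy patterns below illustrate that simple-versus-advanced contrast; they are invented for illustration and are not the study's actual expressions.

```python
# Invented example patterns: a simple match on the bare disease term versus
# an advanced pattern that also captures severity and stage qualifiers.
import re

SIMPLE_RE = re.compile(r"periodontitis", re.IGNORECASE)
ADVANCED_RE = re.compile(
    r"(?P<severity>mild|moderate|severe|generalized|localized)?\s*"
    r"(?:chronic\s+)?periodontitis(?:\s+stage\s+(?P<stage>[IV]+))?",
    re.IGNORECASE,
)

note = "Dx: generalized chronic periodontitis Stage III, recall in 3 months."
m = ADVANCED_RE.search(note)
if m:
    print(m.group(0), "| severity:", m.group("severity"), "| stage:", m.group("stage"))
```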
2007.01758
Collaborative Learning for Faster StyleGAN Embedding
The latent code of the recent popular model StyleGAN has learned disentangled representations thanks to the multi-layer style-based generator. Embedding a given image back into the latent space of StyleGAN enables a wide range of interesting semantic image editing applications. Previous works are able to yield impressive inversion results based on an optimization framework, which, however, suffers from efficiency issues. In this work, we propose a novel collaborative learning framework that consists of an efficient embedding network and an optimization-based iterator. On the one hand, as training progresses, the embedding network gives a reasonable latent code initialization for the iterator. On the other hand, the updated latent code from the iterator in turn supervises the embedding network. In the end, a high-quality latent code can be obtained efficiently with a single forward pass through our embedding network. Extensive experiments demonstrate the effectiveness and efficiency of our work.
labels: cs.CV · __index_level_0__: 185,523
2408.09818
Liquid Fourier Latent Dynamics Networks for fast GPU-based numerical simulations in computational cardiology
Scientific Machine Learning (ML) is gaining momentum as a cost-effective alternative to physics-based numerical solvers in many engineering applications. In fact, scientific ML is currently being used to build accurate and efficient surrogate models starting from high-fidelity numerical simulations, effectively encoding the parameterized temporal dynamics underlying Ordinary Differential Equations (ODEs), or even the spatio-temporal behavior underlying Partial Differential Equations (PDEs), in appropriately designed neural networks. We propose an extension of Latent Dynamics Networks (LDNets), namely Liquid Fourier LDNets (LFLDNets), to create parameterized space-time surrogate models for multiscale and multiphysics sets of highly nonlinear differential equations on complex geometries. LFLDNets employ a neurologically-inspired, sparse, liquid neural network for temporal dynamics, relaxing the requirement of a numerical solver for time advancement and leading to superior performance in terms of tunable parameters, accuracy, efficiency and learned trajectories with respect to neural ODEs based on feedforward fully-connected neural networks. Furthermore, in our implementation of LFLDNets, we use a Fourier embedding with a tunable kernel in the reconstruction network to learn high-frequency functions better and faster than using space coordinates directly as input. We challenge LFLDNets in the framework of computational cardiology and evaluate their capabilities on two 3-dimensional test cases arising from multiscale cardiac electrophysiology and cardiovascular hemodynamics. This paper illustrates the capability to run Artificial Intelligence-based numerical simulations on single or multiple GPUs in a matter of minutes and represents a significant step forward in the development of physics-informed digital twins.
labels: cs.CE, cs.LG, cs.NE · __index_level_0__: 481,601
2006.15975
Robots and COVID-19: Challenges in integrating robots for collaborative automation
Objective: The status of human-robot collaboration for assembly applications is reviewed and key current challenges for the research community and practitioners are presented. Background: As the COVID-19 pandemic surfaced, manufacturers came under pressure to address demand challenges, and social distancing measures left fewer people available to work. In such situations, robots were looked to as a way to support humans and address shortages in supply. An important activity where humans are needed in a manufacturing value chain is assembly. HRC assembly systems are supposed to safeguard coexisting humans, perform a range of actions, and often need to be reconfigured to handle product variety. This requires them to be resilient and adaptable to various configurations during their operational life. Besides the potential advantages of using robots, the challenges of using them in industrial assembly are enormous. Methods: This mini-review summarizes the challenges of industrial deployment of collaborative robots for assembly applications. Applications: The documented challenges highlight future research directions in human-robot interaction for industrial applications.
labels: cs.RO · __index_level_0__: 184,676
2112.14820
Application of Hierarchical Temporal Memory Theory for Document Categorization
The current work studies the performance of Hierarchical Temporal Memory (HTM) theory for automated classification of text and documents. HTM is a biologically inspired theory based on the working principles of the human neocortex. This study provides an alternative framework for document categorization using the Spatial Pooler learning algorithm in HTM theory. As HTM accepts only a stream of binary data as input, the Latent Semantic Indexing (LSI) technique is used to extract the top features from the input and convert them into binary format. The Spatial Pooler algorithm converts the binary input into sparse patterns, with similar input text having overlapping spatial patterns, making it easy to classify the patterns into categories. The results obtained show that HTM theory, although in its nascent stages, performs on par with most popular machine learning based classifiers.
labels: cs.LG, cs.CL · __index_level_0__: 273,620
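The HTM abstract above describes using LSI to extract top features and binarize them for the Spatial Pooler. A rough sketch of that preprocessing, assuming TF-IDF plus truncated SVD for LSI and a per-component median threshold for binarization (the paper's exact binarization may differ):

```python
# Rough sketch: LSI (TF-IDF + truncated SVD) features binarized per component.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["cats purr and sleep", "dogs bark loudly", "stocks fell on weak earnings"]
X = TfidfVectorizer().fit_transform(docs)
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
binary = (Z > np.median(Z, axis=0)).astype(np.uint8)  # one bit per LSI component
print(binary)
```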
2410.10481
Model-Based Privacy-Preserving Knowledge Transfer for Large Language Models
As large language models (LLMs) become more prevalent, effectively utilizing domain-specific knowledge while ensuring privacy has become critical. Existing methods often struggle to balance utility and privacy. For instance, retrieval-augmented generation (RAG) enables LLMs to access domain-specific knowledge but compromises the privacy of sensitive data. On the other hand, differentially private data synthesis techniques offer strong privacy guarantees but often result in poor utility. To address this challenge, we propose Llamdex, a novel framework that enhances LLMs using only models trained on domain-specific data, integrated into LLMs through carefully designed connection modules. Our approach significantly enhances the accuracy of domain-specific tasks, achieving up to a 26% accuracy improvement compared to state-of-the-art data synthesis methods under the same differential privacy constraints. Experimental results show that Llamdex not only improves the accuracy of LLM responses but also maintains comparable inference efficiency to the original LLM, highlighting its potential for real applications.
labels: cs.AI, cs.LG, cs.CR · __index_level_0__: 498,097
1810.12138
Audio inpainting of music by means of neural networks
We studied the ability of deep neural networks (DNNs) to restore missing audio content based on its context, a process usually referred to as audio inpainting. We focused on gaps in the range of tens of milliseconds. The proposed DNN structure was trained on audio signals containing music and musical instruments, separately, with 64-ms long gaps. The input to the DNN was the context, i.e., the signal surrounding the gap, transformed into time-frequency (TF) coefficients. Our results were compared to those obtained from a reference method based on linear predictive coding (LPC). For music, our DNN significantly outperformed the reference method, demonstrating a generally good usability of the proposed DNN structure for inpainting complex audio signals like music.
labels: cs.SD, cs.LG · __index_level_0__: 111,691
2208.00767
Multimodal Neural Machine Translation with Search Engine Based Image Retrieval
Recently, a number of works have shown that the performance of neural machine translation (NMT) can be improved to a certain extent by using visual information. However, most of these conclusions are drawn from the analysis of experimental results based on a limited set of bilingual sentence-image pairs, such as Multi30K. In these kinds of datasets, the content of one bilingual parallel sentence pair must be well represented by a manually annotated image, which differs from the actual translation situation. Some previous works have been proposed to address the problem by retrieving images from existing sentence-image pairs with a topic model. However, because of the limited collection of sentence-image pairs they use, their image retrieval method has difficulty dealing with out-of-vocabulary words, and can hardly prove that the visual information enhances NMT rather than the mere co-occurrence of images and sentences. In this paper, we propose an open-vocabulary image retrieval method to collect descriptive images for a bilingual parallel corpus using an image search engine. Next, we propose a text-aware attentive visual encoder to filter incorrectly collected noisy images. Experimental results on Multi30K and two other translation datasets show that our proposed method achieves significant improvements over strong baselines.
labels: cs.AI, cs.IR, cs.CL, cs.CV · __index_level_0__: 310,955
1711.09744
How linguistic descriptions of data can help to the teaching-learning process in higher education, case of study: artificial intelligence
Artificial Intelligence is a central topic in the computer science curriculum. Since 2011, a project-based learning methodology based on computer games has been designed and implemented in the artificial intelligence course at the University of the Bio-Bio. The project aims to develop software-controlled agents (bots) programmed using the heuristic algorithms seen during the course. This methodology yields good learning results; however, several challenges have been found during its implementation. In this paper we show how linguistic descriptions of data can help provide students and teachers with technical and personalized feedback about the learned algorithms. An algorithm behavior profile and a new Turing test for computer game bots based on linguistic modelling of complex phenomena are also proposed to deal with these challenges. To show and explore the possibilities of this new technology, a web platform has been designed and implemented by one of the authors, and its incorporation into the assessment process allows us to improve the teaching-learning process.
labels: cs.AI · __index_level_0__: 85,467
2310.10301
Multi-Body Neural Scene Flow
The test-time optimization of scene flow - using a coordinate network as a neural prior - has gained popularity due to its simplicity, lack of dataset bias, and state-of-the-art performance. We observe, however, that although coordinate networks capture general motions by implicitly regularizing the scene flow predictions to be spatially smooth, the neural prior by itself is unable to identify the underlying multi-body rigid motions present in real-world data. To address this, we show that multi-body rigidity can be achieved without the cumbersome and brittle strategy of constraining the $SE(3)$ parameters of each rigid body as done in previous works. This is achieved by regularizing the scene flow optimization to encourage isometry in flow predictions for rigid bodies. This strategy enables multi-body rigidity in scene flow while maintaining a continuous flow field, hence allowing dense long-term scene flow integration across a sequence of point clouds. We conduct extensive experiments on real-world datasets and demonstrate that our approach outperforms the state-of-the-art in 3D scene flow and long-term point-wise 4D trajectory prediction. The code is available at: https://github.com/kavisha725/MBNSF.
labels: cs.RO, cs.CV · __index_level_0__: 400,166
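The scene-flow abstract above regularizes flow toward isometry instead of fitting per-body $SE(3)$ parameters. A minimal PyTorch sketch of one plausible isometry penalty, assuming all pairwise distances within a sampled neighborhood should be preserved (the paper's exact regularizer may differ):

```python
# Sketch of an isometry-style regularizer: penalize changes in pairwise
# distances after adding the predicted flow, discouraging non-rigid motion.
import torch

def isometry_loss(points: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """points, flow: [N, 3]; returns mean squared change in pairwise distances."""
    d_before = torch.cdist(points, points)  # [N, N] distances at time t
    warped = points + flow
    d_after = torch.cdist(warped, warped)   # [N, N] distances at time t+1
    return ((d_after - d_before) ** 2).mean()

pts = torch.randn(64, 3)
loss = isometry_loss(pts, flow=0.05 * torch.randn(64, 3))
print(loss.item())
```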
2406.12334
What Did I Do Wrong? Quantifying LLMs' Sensitivity and Consistency to Prompt Engineering
Large Language Models (LLMs) changed the way we design and interact with software systems. Their ability to process and extract information from text has drastically improved productivity in a number of routine tasks. Developers who want to include these models in their software stack, however, face a dreadful challenge: debugging LLMs' inconsistent behavior across minor variations of the prompt. We therefore introduce two metrics for classification tasks, namely sensitivity and consistency, which are complementary to task performance. Sensitivity measures changes of predictions across rephrasings of the prompt and does not require access to ground truth labels; consistency measures how predictions vary across rephrasings for elements of the same class. We perform an empirical comparison of these metrics on text classification tasks, using them as a guideline for understanding failure modes of the LLM. Our hope is that sensitivity and consistency will be helpful to guide prompt engineering and obtain LLMs that balance robustness with performance.
labels: cs.LG, Other · __index_level_0__: 465,349
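The sensitivity and consistency metrics defined in the abstract above can be read quite literally as pairwise agreement statistics over prompt rephrasings. The sketch below is one such reading in plain Python; the exact normalization and aggregation are assumptions.

```python
# preds_per_prompt[i][j] is the model's predicted class for rephrasing j
# of prompt i; aggregation choices below are assumptions.
from itertools import combinations

def sensitivity(preds_per_prompt: list[list[str]]) -> float:
    """Fraction of rephrasing pairs whose predictions disagree (no labels needed)."""
    flips, pairs = 0, 0
    for preds in preds_per_prompt:
        for a, b in combinations(preds, 2):
            pairs += 1
            flips += a != b
    return flips / pairs if pairs else 0.0

def consistency(preds_per_prompt: list[list[str]], labels: list[str]) -> float:
    """Fraction of rephrasing pairs that agree, averaged within each true class."""
    by_class: dict[str, list[float]] = {}
    for preds, label in zip(preds_per_prompt, labels):
        agree = [a == b for a, b in combinations(preds, 2)]
        by_class.setdefault(label, []).append(sum(agree) / len(agree))
    return sum(sum(v) / len(v) for v in by_class.values()) / len(by_class)

print(sensitivity([["pos", "pos", "neg"], ["neg", "neg", "neg"]]))  # 0.333...
```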
2412.15298
A Comparative Study of DSPy Teleprompter Algorithms for Aligning Large Language Models Evaluation Metrics to Human Evaluation
We argue that the Declarative Self-improving Python (DSPy) optimizers are a way to align large language model (LLM) prompts and their evaluations to human annotations. We present a comparative analysis of five teleprompter algorithms, namely, Cooperative Prompt Optimization (COPRO), Multi-Stage Instruction Prompt Optimization (MIPRO), BootstrapFewShot, BootstrapFewShot with Optuna, and K-Nearest Neighbor Few Shot, within the DSPy framework with respect to their ability to align with human evaluations. As a concrete example, we focus on optimizing the prompt to align hallucination detection (using an LLM as a judge) with human-annotated ground truth labels for a publicly available benchmark dataset. Our experiments demonstrate that optimized prompts can outperform various benchmark methods for detecting hallucination, and certain teleprompters outperform the others in at least these experiments.
labels: cs.AI, cs.LG, cs.CL · __index_level_0__: 519,050
2205.15413
PolypConnect: Image inpainting for generating realistic gastrointestinal tract images with polyps
Early identification of a polyp in the lower gastrointestinal (GI) tract can lead to prevention of life-threatening colorectal cancer. Developing computer-aided diagnosis (CAD) systems to detect polyps can improve detection accuracy and efficiency and save the time of the domain experts called endoscopists. Lack of annotated data is a common challenge when building CAD systems. Generating synthetic medical data is an active research area aimed at overcoming the problem of having relatively few true positive cases in the medical domain. To efficiently train machine learning (ML) models, which are the core of CAD systems, a considerable amount of data should be used. In this respect, we propose the PolypConnect pipeline, which can convert non-polyp images into polyp images to increase the size of training datasets. We present the whole pipeline with quantitative and qualitative evaluations involving endoscopists. The polyp segmentation model trained using synthetic and real data shows a 5.1% improvement in mean intersection over union (mIoU) compared to the model trained using only real data. The code for all the experiments is available on GitHub to reproduce the results.
labels: cs.LG, cs.CV · __index_level_0__: 299,712
1609.00651
Safety Barrier Certificates for Heterogeneous Multi-Robot Systems
This paper presents a formal framework for collision avoidance in multi-robot systems, wherein an existing controller is modified in a minimally invasive fashion to ensure safety. We build this framework through the use of control barrier functions (CBFs), which guarantee forward invariance of a safe set; these yield safety barrier certificates in the context of heterogeneous robot dynamics subject to acceleration bounds. Moreover, safety barrier certificates are extended to a distributed control framework, wherein neighboring agent dynamics are unknown, through local parameter identification. The end result is an optimization-based controller that formally guarantees collision-free behavior in heterogeneous multi-agent systems by minimally modifying the desired controller via safety barrier constraints. This formal result is verified in simulation on a multi-robot system consisting of both cumbersome and agile robots, and is demonstrated experimentally on a system with a Magellan Pro robot and three Khepera III robots.
labels: cs.RO · __index_level_0__: 60,502
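The safety-barrier-certificate abstract above minimally modifies a nominal controller subject to a CBF constraint. Below is a toy single-integrator sketch of that idea, using a closed-form half-space projection in place of a full QP solver; the dynamics, barrier function, and gains are assumptions, not the paper's heterogeneous acceleration-bounded setup.

```python
# Toy CBF sketch for a single integrator: minimally correct u_nom so that
# h(x) = ||x - x_obs||^2 - d^2 satisfies dh/dt >= -alpha * h(x).
import numpy as np

def safe_input(x, u_nom, x_obs, d=1.0, alpha=1.0):
    h = np.dot(x - x_obs, x - x_obs) - d ** 2
    grad_h = 2.0 * (x - x_obs)             # dh/dx
    residual = grad_h @ u_nom + alpha * h  # CBF condition: residual >= 0
    if residual >= 0:
        return u_nom                       # nominal input is already safe
    # Closed-form minimal-norm projection onto the half-space
    # grad_h @ u >= -alpha * h (the QP solution for this simple case).
    return u_nom - residual * grad_h / (grad_h @ grad_h)

x, x_obs = np.array([1.5, 0.0]), np.zeros(2)
print(safe_input(x, u_nom=np.array([-1.0, 0.0]), x_obs=x_obs))  # slowed approach
```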
1409.8112
Efficient high-quality motion planning by fast all-pairs r-nearest-neighbors
Sampling-based motion-planning algorithms typically rely on nearest-neighbor (NN) queries when constructing a roadmap. Recent results suggest that in various settings NN queries may be the computational bottleneck of such algorithms. Moreover, in several asymptotically-optimal algorithms these NN queries are of a specific form: Given a set of points and a radius r report all pairs of points whose distance is at most r. This calls for an application-specific NN data structure tailored to efficiently answering this type of queries. Randomly transformed grids (RTG) were recently proposed by Aiger et al. as a tool to answer such queries and have been shown to outperform common implementations of NN data structures in this context. In this work we employ RTG for sampling-based motion-planning algorithms and describe an efficient implementation of the approach. We show that for motion-planning, RTG allow for faster convergence to high-quality solutions when compared with existing NN data structures. Additionally, RTG enable significantly shorter construction times for batched-PRM variants; specifically, we demonstrate a speedup by a factor of two to three for some scenarios.
labels: cs.RO · __index_level_0__: 36,385
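The randomly transformed grids (RTG) mentioned above hash points into randomly shifted grid cells so that all-pairs r-near-neighbor reporting only compares points within a cell. The sketch below captures the shifted-grid bucketing idea, assuming a cell size of 2r and a few repeat rounds to catch pairs split by cell boundaries; the actual RTG construction also applies random rotations and has formal guarantees not reproduced here.

```python
# Sketch of shifted-grid bucketing for all-pairs r-near-neighbor reporting.
import numpy as np
from collections import defaultdict
from itertools import combinations

def all_pairs_r_nn(points: np.ndarray, r: float, rounds: int = 4, c: float = 2.0):
    rng = np.random.default_rng(0)
    found = set()
    for _ in range(rounds):
        shift = rng.uniform(0, c * r, size=points.shape[1])
        cells = defaultdict(list)
        for i, p in enumerate(points):  # hash each point into its grid cell
            cells[tuple(np.floor((p + shift) / (c * r)).astype(int))].append(i)
        for bucket in cells.values():   # verify candidates within each cell
            for i, j in combinations(bucket, 2):
                if np.linalg.norm(points[i] - points[j]) <= r:
                    found.add((i, j))
    return found

pts = np.random.default_rng(1).uniform(size=(200, 3))
print(len(all_pairs_r_nn(pts, r=0.1)))
```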
2405.19735
Twin Deformable Point Convolutions for Point Cloud Semantic Segmentation in Remote Sensing Scenes
Thanks to the application of deep learning technology to point cloud processing in the remote sensing field, point cloud segmentation has become a research hotspot in recent years, with applications in real-world 3D, smart cities, and other fields. Although existing solutions have made unprecedented progress, they ignore an inherent characteristic of point clouds in remote sensing: the points are strictly arranged according to latitude, longitude, and altitude, a property that greatly simplifies their segmentation. To exploit this property, we propose novel convolution operators, termed Twin Deformable point Convolutions (TDConvs), which aim to achieve adaptive feature learning by learning deformable sampling points in the latitude-longitude plane and the altitude direction, respectively. First, to model the characteristics of the latitude-longitude plane, we propose a Cylinder-wise Deformable point Convolution (CyDConv) operator, which generates a two-dimensional cylinder map by constructing a cylinder-like grid in the latitude-longitude direction. Furthermore, to better integrate the features of the latitude-longitude plane with the spatial geometric features, we perform a multi-scale fusion of the extracted latitude-longitude features and spatial geometric features, realized through the aggregation of adjacent point features at different scales. In addition, a Sphere-wise Deformable point Convolution (SpDConv) operator is introduced to adaptively offset the sampling points in three-dimensional space by constructing a sphere grid structure, aiming to model the characteristics of the altitude direction. Experiments on existing popular benchmarks show that our TDConvs achieve the best segmentation performance, surpassing existing state-of-the-art methods.
labels: cs.CV · __index_level_0__: 459,037
2412.19755
"Did my figure do justice to the answer?" : Towards Multimodal Short Answer Grading with Feedback (MMSAF)
Assessments play a vital role in a student's learning process by providing feedback on the student's proficiency level in a subject. While assessments often make use of short answer questions, it is difficult to grade such questions at a large scale. Moreover, such questions often involve students drawing supporting diagrams along with their textual explanations. Such questions promote multimodal literacy and are aligned with competency-based questions, which demand deeper cognitive processing from students. However, existing literature does not deal with the automatic grading of such answers. To bridge this gap, we propose the Multimodal Short Answer Grading with Feedback (MMSAF) problem along with a dataset of 2197 data points. Additionally, we provide an automated framework for generating such datasets. Our evaluations of existing Large Language Models (LLMs) on this dataset achieved an overall accuracy of 55% on the Level of Correctness labels and 75% on the Image Relevance labels. As per human experts, Pixtral was more aligned with human judgement and values for biology, and ChatGPT for physics and chemistry, achieving a score of 4 or more out of 5 on most parameters.
labels: cs.AI · __index_level_0__: 520,956
2109.04689
Generating Self-Contained and Summary-Centric Question Answer Pairs via Differentiable Reward Imitation Learning
Motivated by suggested question generation in conversational news recommendation systems, we propose a model for generating question-answer pairs (QA pairs) with self-contained, summary-centric questions and length-constrained, article-summarizing answers. We begin by collecting a new dataset of news articles with questions as titles and pairing them with summaries of varying length. This dataset is used to learn a QA pair generation model producing summaries as answers that balance brevity with sufficiency jointly with their corresponding questions. We then reinforce the QA pair generation process with a differentiable reward function to mitigate exposure bias, a common problem in natural language generation. Both automatic metrics and human evaluation demonstrate these QA pairs successfully capture the central gists of the articles and achieve high answer accuracy.
labels: cs.AI, cs.LG, cs.CL · __index_level_0__: 254,498
2308.15556
Polarized Speech on Online Platforms in the American Political Context
Using language models, we analyze approximately 2.5 billion comments from Reddit and Twitter across 1.7 million accounts spanning 2007 to 2023 to study the prevalence and evolution of polarized rhetoric in American political discourse. Our findings show rising outgroup polarization on both platforms, with each new cohort of users displaying higher polarization than the last, and a small fraction of users contributing disproportionately to negative polarization. Polarization varies consistently by topic: right-leaning users are twice as likely to use polarized rhetoric around immigration, while left-leaning users are more polarized on healthcare. On Twitter, U.S. politicians on the left have been consistently more polarized than everyday users, but in the past four years, politicians on the right have experienced the sharpest rise in polarization, surpassing journalists, media, and everyday users. Today, politicians, the group listened to the most for their political rhetoric, are far more polarized than everyday users. On Reddit, a few communities dominate polarized discussions, with the rise and eventual ban of r/TheDonald significantly shaping polarization trends on the right. Our large-scale analysis reveals previously unknown patterns of polarization across groups and topics on two major social media platforms.
labels: cs.SI · __index_level_0__: 388,718
0708.2213
Moderate Growth Time Series for Dynamic Combinatorics Modelisation
Here, we present a family of time series with a simple growth constraint. This family can be the basis of a model for emerging computation in business and micro-economy, where global functions can be expressed from local rules. We make explicit a double statistic on these series which allows us to establish a one-to-one correspondence with three other ballot-like structures.
labels: cs.MA, Other · __index_level_0__: 557
2405.18937
Kestrel: Point Grounding Multimodal LLM for Part-Aware 3D Vision-Language Understanding
While 3D MLLMs have achieved significant progress, they are restricted to object and scene understanding and struggle to understand 3D spatial structures at the part level. In this paper, we introduce Kestrel, representing a novel approach that empowers 3D MLLMs with part-aware understanding, enabling better interpretation and segmentation grounding of 3D objects at the part level. Despite its significance, the current landscape lacks tasks and datasets that endow and assess this capability. Therefore, we propose two novel tasks: (1) Part-Aware Point Grounding, the model is tasked with directly predicting a part-level segmentation mask based on user instructions, and (2) Part-Aware Point Grounded Captioning, the model provides a detailed caption that includes part-level descriptions and their corresponding masks. To support learning and evaluating for these tasks, we introduce 3DCoMPaT Grounded Instructions Dataset (3DCoMPaT-GRIN). 3DCoMPaT-GRIN Vanilla, comprising 789k part-aware point cloud-instruction-segmentation mask triplets, is used to evaluate MLLMs' ability of part-aware segmentation grounding. 3DCoMPaT-GRIN Grounded Caption, containing 107k part-aware point cloud-instruction-grounded caption triplets, assesses both MLLMs' part-aware language comprehension and segmentation grounding capabilities. Our introduced tasks, dataset, and Kestrel represent a preliminary effort to bridge the gap between human cognition and 3D MLLMs, i.e., the ability to perceive and engage with the environment at both global and part levels. Extensive experiments on the 3DCoMPaT-GRIN show that Kestrel can generate user-specified segmentation masks, a capability not present in any existing 3D MLLM. Kestrel thus established a benchmark for evaluating the part-aware language comprehension and segmentation grounding of 3D objects. Project page at https://feielysia.github.io/Kestrel.github.io/
labels: cs.CL, cs.CV · __index_level_0__: 458,676
1802.10474
On the Benefits of Asymmetric Coded Cache Placement in Combination Networks with End-User Caches
This paper investigates the fundamental tradeoff between cache size and download time in the $(H, r, M, N)$ combination network, where a server with $N$ files is connected to $H$ relays (without caches) and each of the $K := \binom{H}{r}$ users (with caches of size $M$ files) is connected to a different subset of $r$ relays. Existing schemes fall within two categories: either use the uncoded symmetric cache placement originally proposed for the shared-link model and design a delivery phase dependent on the network topology, or effectively divide the combination network into $H$ independent shared-link networks, each serving $\binom{H-1}{r-1}$ users; in either case, the placement phase is independent of network topology. In this paper, a novel strategy is proposed where the coded cache placement is dependent on network topology. The proposed scheme is shown to be information theoretically optimal for large cache size. In addition, when not exactly optimal, the proposed scheme can also outperform existing schemes.
labels: cs.IT · __index_level_0__: 91,532
2210.14510
Multi-Environment based Meta-Learning with CSI Fingerprints for Radio Based Positioning
Radio-based positioning of a user equipment (UE) using deep learning (DL) methods with channel state information (CSI) fingerprints has shown promising results. DL models are able to capture complex properties embedded in the CSI about a particular environment and map the UE's CSI to the UE's position. However, CSI fingerprints and the DL models trained on such fingerprints are highly dependent on a particular propagation environment, which generally limits the transfer of knowledge of the DL models from one environment to another. In this paper, we propose a DL model consisting of two parts: the first part aims to learn environment-independent features, while the second part combines those features depending on the particular environment. To improve transfer learning, we propose a meta-learning scheme for training the first part over multiple environments. We show that for positioning in a new environment, initializing a DL model with the meta-learned environment-independent function achieves higher UE positioning accuracy compared to regular transfer learning from one environment to the new environment, or compared to training the DL model from scratch with only fingerprints from the new environment. Our proposed scheme is able to create an environment-independent function which can embed knowledge from multiple environments and more effectively learn from a new environment.
labels: cs.AI, cs.LG · __index_level_0__: 326,578
2403.09307
Annotation Free Semantic Segmentation with Vision Foundation Models
Semantic Segmentation is one of the most challenging vision tasks, usually requiring large amounts of training data with expensive pixel level annotations. With the success of foundation models and especially vision-language models, recent works attempt to achieve zeroshot semantic segmentation while requiring either large-scale training or additional image/pixel level annotations. In this work, we generate free annotations for any semantic segmentation dataset using existing foundation models. We use CLIP to detect objects and SAM to generate high quality object masks. Next, we build a lightweight module on top of a self-supervised vision encoder, DinoV2, to align the patch features with a pretrained text encoder for zeroshot semantic segmentation. Our approach can bring language-based semantics to any pretrained vision encoder with minimal training, uses foundation models as the sole source of supervision and generalizes from little training data with no annotation.
labels: cs.CV · __index_level_0__: 437,716
2208.14167
Synthehicle: Multi-Vehicle Multi-Camera Tracking in Virtual Cities
Smart City applications such as intelligent traffic routing or accident prevention rely on computer vision methods for exact vehicle localization and tracking. Due to the scarcity of accurately labeled data, detecting and tracking vehicles in 3D from multiple cameras proves challenging to explore. We present a massive synthetic dataset for multiple vehicle tracking and segmentation in multiple overlapping and non-overlapping camera views. Unlike existing datasets, which only provide tracking ground truth for 2D bounding boxes, our dataset additionally contains perfect labels for 3D bounding boxes in camera- and world coordinates, depth estimation, and instance, semantic and panoptic segmentation. The dataset consists of 17 hours of labeled video material, recorded from 340 cameras in 64 diverse day, rain, dawn, and night scenes, making it the most extensive dataset for multi-target multi-camera tracking so far. We provide baselines for detection, vehicle re-identification, and single- and multi-camera tracking. Code and data are publicly available.
labels: cs.CV · __index_level_0__: 315,242
2407.19332
A Semi-supervised Fake News Detection using Sentiment Encoding and LSTM with Self-Attention
Micro-blogs and cyber-space social networks are the main communication mediums for receiving and sharing news nowadays. As a side effect, however, these networks can disseminate fake news that harms individuals and society. Several methods have been developed to detect fake news, but the majority require large sets of manually labeled data to attain application-level accuracy. Due to strict privacy policies, the required data are often inaccessible or limited to specific topics. On the other hand, the diverse and abundant unlabeled data on social media suggest that, with a few labeled data, the problem of detecting fake news could be tackled via semi-supervised learning. Here, we propose a semi-supervised self-learning method in which sentiment analysis is acquired from state-of-the-art pretrained models. Our learning model is trained in a semi-supervised fashion and incorporates LSTM with self-attention layers. We benchmark our model on a dataset of 20,000 news items along with their feedback, and it shows better performance in precision, recall, and related measures compared to competitive methods in fake news detection.
labels: cs.AI, cs.LG · __index_level_0__: 476,743
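The fake-news abstract above trains in a semi-supervised self-learning fashion, repeatedly pseudo-labeling confident unlabeled examples. Below is a bare-bones sketch of that loop, substituting a logistic-regression-on-TF-IDF classifier for the paper's sentiment-encoded LSTM with self-attention; the toy data and confidence threshold are assumptions.

```python
# Bare-bones self-learning loop: fit on labeled data, pseudo-label confident
# unlabeled examples, refit, repeat. Stand-in classifier and toy data.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled = ["official report confirms the event", "shocking secret cure they hide"]
y = np.array([0, 1])                      # 0 = real, 1 = fake
unlabeled = ["they hide the shocking truth", "report confirms official data"]

vec = TfidfVectorizer().fit(labeled + unlabeled)
X_l, X_u = vec.transform(labeled), vec.transform(unlabeled)
clf = LogisticRegression().fit(X_l, y)
for _ in range(3):                        # self-learning rounds
    proba = clf.predict_proba(X_u)
    conf = proba.max(axis=1) > 0.6        # keep only confident pseudo-labels
    if not conf.any():
        break
    X_aug = np.vstack([X_l.toarray(), X_u[conf].toarray()])
    y_aug = np.concatenate([y, proba.argmax(axis=1)[conf]])
    clf = LogisticRegression().fit(X_aug, y_aug)
print(clf.predict(X_u))
```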
2011.00747
Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation
We introduce dual-decoder Transformer, a new model architecture that jointly performs automatic speech recognition (ASR) and multilingual speech translation (ST). Our models are based on the original Transformer architecture (Vaswani et al., 2017) but consist of two decoders, each responsible for one task (ASR or ST). Our major contribution lies in how these decoders interact with each other: one decoder can attend to different information sources from the other via a dual-attention mechanism. We propose two variants of these architectures corresponding to two different levels of dependencies between the decoders, called the parallel and cross dual-decoder Transformers, respectively. Extensive experiments on the MuST-C dataset show that our models outperform the previously-reported highest translation performance in the multilingual settings, and outperform as well bilingual one-to-one results. Furthermore, our parallel models demonstrate no trade-off between ASR and ST compared to the vanilla multi-task architecture. Our code and pre-trained models are available at https://github.com/formiel/speech-translation.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
204,349
2203.06605
Depth-Aware Generative Adversarial Network for Talking Head Video Generation
Talking head video generation aims to produce a synthetic human face video that contains the identity and pose information respectively from a given source image and a driving video. Existing works for this task heavily rely on 2D representations (e.g., appearance and motion) learned from the input images. However, dense 3D facial geometry (e.g., pixel-wise depth) is extremely important for this task, as it is particularly beneficial for generating accurate 3D face structures and distinguishing noisy information from the possibly cluttered background. Nevertheless, dense 3D geometry annotations are prohibitively costly for videos and are typically not available for this video generation task. In this paper, we first introduce a self-supervised geometry learning method to automatically recover the dense 3D geometry (i.e., depth) from face videos without requiring any expensive 3D annotation data. Based on the learned dense depth maps, we further propose to leverage them to estimate sparse facial keypoints that capture the critical movement of the human head. The depth is also utilized to learn 3D-aware cross-modal (i.e., appearance and depth) attention to guide the generation of motion fields for warping source image representations. All these contributions compose a novel depth-aware generative adversarial network (DaGAN) for talking head generation. Extensive experiments demonstrate that our proposed method can generate highly realistic faces and achieves significant results on unseen human faces.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
285,171
2012.01359
3D architected isotropic materials with tunable stiffness and buckling strength
This paper presents a class of 3D single-scale isotropic materials with tunable stiffness and buckling strength obtained via topology optimization and subsequent shape optimization. Compared to stiffness-optimal closed-cell plate material, the material class reduces the Young's modulus to a range from 79% to 58%, but improves the uniaxial buckling strength to a range from 180% to 767%. Based on small deformation theory, material stiffness is evaluated using the homogenization method. Buckling strength under a given macroscopic stress state is estimated using linear buckling analysis with Block-Floquet boundary conditions to capture both short and long wavelength buckling modes. The 3D isotropic single-scale materials with tunable properties are designed using topology optimization, and are then further simplified using shape optimization. Both topology and shape optimized results demonstrate that material buckling strength can be significantly enhanced by hybrids between truss and variable thickness plate structures.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
209,396
2307.16169
StarSRGAN: Improving Real-World Blind Super-Resolution
The aim of blind super-resolution (SR) in computer vision is to improve the resolution of an image without prior knowledge of the degradation process that caused the image to be low-resolution. The state-of-the-art (SOTA) model Real-ESRGAN has advanced perceptual loss and produced visually compelling outcomes using more complex degradation models to simulate real-world degradations. However, there is still room to improve the super-resolved quality of Real-ESRGAN by implementing recent techniques. This research paper introduces StarSRGAN, a novel GAN model designed for blind super-resolution tasks that utilizes five different architectures. Our model achieves new SOTA performance, scoring roughly 10% better on the MANIQA and AHIQ measures, as demonstrated by experimental comparisons with Real-ESRGAN. In addition, as a compact version, StarSRGAN Lite provides approximately 7.5 times faster reconstruction speed (real-time upsampling from 540p to 4K) while still keeping nearly 90% of the image quality, thereby facilitating the development of a real-time SR experience for future research. Our codes are released at https://github.com/kynthesis/StarSRGAN.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
382,499
2110.09267
Boosting Image Outpainting with Semantic Layout Prediction
The objective of image outpainting is to extend an image's current borders and generate new regions based on known ones. Previous methods adopt generative adversarial networks (GANs) to synthesize realistic images. However, the lack of explicit semantic representation leads to blurry and abnormal image pixels when the outpainting areas are complex and contain various objects. In this work, we decompose the outpainting task into two stages. First, we train a GAN to extend regions in the semantic segmentation domain instead of the image domain. Second, another GAN model is trained to synthesize real images based on the extended semantic layouts. The first model focuses on low-frequency context such as sizes, classes and other semantic cues, while the second model focuses on high-frequency context like color and texture. By this design, our approach can handle semantic clues more easily and hence works better in complex scenarios. We evaluate our framework on various datasets and provide quantitative and qualitative analysis. Experiments demonstrate that our method generates reasonable extended semantic layouts and images, outperforming state-of-the-art models.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
261,748
1906.09340
Leveraging Reinforcement Learning Techniques for Effective Policy Adoption and Validation
Rewards and punishments in different forms are pervasive and present in a wide variety of decision-making scenarios. By observing the outcome of a sufficient number of repeated trials, one gradually learns the value and usefulness of a particular policy or strategy. However, in a given environment, the outcomes resulting from different trials are subject to chance influence and variations. In learning about the usefulness of a given policy, significant costs are involved in systematically undertaking the sequential trials; therefore, in most learning episodes, one would wish to keep the cost within bounds by adopting learning stopping rules. In this paper, we examine the deployment of different stopping strategies in given learning environments, which vary from highly stringent for mission-critical operations to highly tolerant for non-mission-critical operations; emphasis is placed on the former, with particular application to aviation safety. In policy evaluation, two sequential phases of learning are identified, and we describe the outcome variations using a probabilistic model, with closed-form expressions obtained for the key measures of performance. Decision rules that map the trial observations to policy choices are also formulated. In addition, simulation experiments are performed, which corroborate the validity of the theoretical results.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
136,126
2310.07018
NEWTON: Are Large Language Models Capable of Physical Reasoning?
Large Language Models (LLMs), through their contextualized representations, have been empirically proven to encapsulate syntactic, semantic, word sense, and common-sense knowledge. However, there has been limited exploration of their physical reasoning abilities, specifically concerning the crucial attributes for comprehending everyday objects. To address this gap, we introduce NEWTON, a repository and benchmark for evaluating the physics reasoning skills of LLMs. Further, to enable domain-specific adaptation of this benchmark, we present a pipeline to enable researchers to generate a variant of this benchmark that has been customized to the objects and attributes relevant for their application. The NEWTON repository comprises a collection of 2800 object-attribute pairs, providing the foundation for generating infinite-scale assessment templates. The NEWTON benchmark consists of 160K QA questions, curated using the NEWTON repository to investigate the physical reasoning capabilities of several mainstream language models across foundational, explicit, and implicit reasoning tasks. Through extensive empirical analysis, our results highlight the capabilities of LLMs for physical reasoning. We find that LLMs like GPT-4 demonstrate strong reasoning capabilities in scenario-based tasks but exhibit less consistency in object-attribute reasoning compared to humans (50% vs. 84%). Furthermore, the NEWTON platform demonstrates its potential for evaluating and enhancing language models, paving the way for their integration into physically grounded settings, such as robotic manipulation. Project site: https://newtonreasoning.github.io
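To illustrate the kind of template-driven question generation a repository of object-attribute pairs enables, here is a toy sketch; the pairs and the template string are hypothetical stand-ins, not taken from the NEWTON repository.

```python
# Hypothetical illustration of template-based question generation from
# object-attribute pairs, in the spirit of the pipeline described above.
object_attribute_pairs = [
    ("ceramic mug", "fragile"),
    ("steel hammer", "rigid"),
]

TEMPLATE = "Is a {obj} typically {attr}? Answer yes or no."

def generate_questions(pairs):
    return [TEMPLATE.format(obj=o, attr=a) for o, a in pairs]

for q in generate_questions(object_attribute_pairs):
    print(q)
```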
false
false
false
false
true
false
false
true
true
false
false
false
false
false
false
false
false
false
398,795
2405.10046
A Preprocessing and Postprocessing Voxel-based Method for LiDAR Semantic Segmentation Improvement in Long Distance
In recent years, considerable research on LiDAR semantic segmentation has been conducted, introducing several new state-of-the-art models. However, most research focuses on single-scan point clouds, omitting time-sequential information and thus limiting performance, especially in long-distance outdoor scenarios. Moreover, varying density and occlusions constitute significant challenges for single-scan approaches. In this paper, we propose a LiDAR point cloud preprocessing and postprocessing method. This multi-stage approach, in conjunction with state-of-the-art models in a multi-scan setting, aims to solve those challenges. We demonstrate the benefits of our method through quantitative evaluation against the given models in single-scan settings. In particular, we achieve significant improvements in mIoU performance of over 5 percentage points in the medium range and over 10 percentage points in the far range. This is essential for 3D semantic scene understanding at long distance as well as for applications where offline processing is permissible.
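As a rough sketch of the multi-scan idea, the following assumes that several consecutive scans are merged and voxel-downsampled before segmentation; the voxel size and the mean-point reduction are illustrative choices, not the paper's actual pipeline.

```python
# Minimal sketch (assumed preprocessing step): accumulate several
# time-sequential LiDAR scans and downsample them on a common voxel
# grid, which densifies sparse long-range regions before segmentation.
import numpy as np

def voxelize(points, voxel_size=0.2):
    """Keep one representative (mean) point per occupied voxel."""
    keys = np.floor(points[:, :3] / voxel_size).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inv).astype(float)
    out = np.zeros((inv.max() + 1, 3))
    for d in range(3):
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out

scans = [np.random.randn(1000, 3) * 20 for _ in range(4)]  # fake scans
merged = voxelize(np.vstack(scans))
print(merged.shape)
```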
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
454,631
2304.12130
Reconstructing Turbulent Flows Using Physics-Aware Spatio-Temporal Dynamics and Test-Time Refinement
Simulating turbulence is critical for many societally important applications in aerospace engineering, environmental science, the energy industry, and biomedicine. Large eddy simulation (LES) has been widely used as an alternative to direct numerical simulation (DNS) for simulating turbulent flows due to its reduced computational cost. However, LES is unable to capture all of the scales of turbulent transport accurately. Reconstructing DNS from low-resolution LES is critical for many scientific and engineering disciplines, but it poses many challenges to existing super-resolution methods due to the spatio-temporal complexity of turbulent flows. In this work, we propose a new physics-guided neural network for reconstructing the sequential DNS from low-resolution LES data. The proposed method leverages the partial differential equation that underlies the flow dynamics in the design of spatio-temporal model architecture. A degradation-based refinement method is also developed to enforce physical constraints and further reduce the accumulated reconstruction errors over long periods. The results on two different types of turbulent flow data confirm the superiority of the proposed method in reconstructing the high-resolution DNS data and preserving the physical characteristics of flow transport.
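A generic example of a physics-guided loss term of the kind such models use: penalize the residual of a governing PDE on the network output via automatic differentiation. The 1-D advection equation and the tiny MLP below are stand-ins for illustration, not the paper's flow equations or architecture.

```python
# Generic sketch of a physics-based residual loss: penalize violation
# of a simple 1-D advection equation u_t + c * u_x = 0 on the network
# output, computed with automatic differentiation.
import torch

def pde_residual_loss(net, x, t, c=1.0):
    x = x.requires_grad_(True)
    t = t.requires_grad_(True)
    u = net(torch.stack([x, t], dim=-1)).squeeze(-1)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    return ((u_t + c * u_x) ** 2).mean()

net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
loss = pde_residual_loss(net, torch.rand(256), torch.rand(256))
loss.backward()
print(float(loss))
```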
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
360,090
1710.09288
Adversarial Deep Structured Nets for Mass Segmentation from Mammograms
Mass segmentation provides effective morphological features which are important for mass diagnosis. In this work, we propose a novel end-to-end network for mammographic mass segmentation which employs a fully convolutional network (FCN) to model a potential function, followed by a CRF to perform structured learning. Because the mass distribution varies greatly with pixel position, the FCN is combined with a positional prior. Further, we employ adversarial training to eliminate over-fitting due to the small sizes of mammogram datasets. A multi-scale FCN is employed to improve the segmentation performance. Experimental results on two public datasets, INbreast and DDSM-BCRP, demonstrate that our end-to-end network achieves better performance than state-of-the-art approaches. \footnote{https://github.com/wentaozhu/adversarial-deep-structural-networks.git}
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
true
false
false
83,183
1705.08620
Robust Data Geometric Structure Aligned Close yet Discriminative Domain Adaptation
Domain adaptation (DA) is a form of transfer learning which aims to leverage labeled data in a related source domain to achieve informed knowledge transfer and help the classification of unlabeled data in a target domain. In this paper, we propose a novel DA method, namely Robust Data Geometric Structure Aligned, Close yet Discriminative Domain Adaptation (RSA-CDDA), which brings source and target data distributions closer in a latent joint subspace and aligns the inherent hidden geometric structures of the source and target data, while performing discriminative DA by repulsing inter-class source and target data. The proposed method performs domain adaptation between source and target by solving a unified model, which incorporates data distribution constraints, in particular via a nonparametric distance, i.e., Maximum Mean Discrepancy (MMD), as well as constraints on inherent hidden data geometric structure segmentation and alignment between source and target, through low-rank and sparse representation. RSA-CDDA searches for a joint subspace by solving the proposed unified model through iterative optimization, alternating a Rayleigh quotient algorithm and an inexact augmented Lagrange multiplier algorithm. Extensive experiments carried out on standard DA benchmarks, i.e., 16 cross-domain image classification tasks, verify the effectiveness of the proposed method, which consistently outperforms the state-of-the-art methods.
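The nonparametric distance named above, Maximum Mean Discrepancy (MMD), has a standard kernel estimate; a minimal RBF-kernel version follows (the bandwidth gamma is an arbitrary illustrative choice, not the paper's setting).

```python
# Sketch of the (biased) squared-MMD estimate with an RBF kernel: a
# standard nonparametric distance between source and target features.
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

source = np.random.randn(100, 16)
target = np.random.randn(100, 16) + 0.5  # shifted domain
print(mmd_rbf(source, target))
```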
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
74,057
2401.05412
Spatial-Related Sensors Matters: 3D Human Motion Reconstruction Assisted with Textual Semantics
Leveraging wearable devices for motion reconstruction has emerged as an economical and viable technique. Certain methodologies employ sparse Inertial Measurement Units (IMUs) on the human body and harness data-driven strategies to model human poses. However, the reconstruction of motion based solely on sparse IMUs data is inherently fraught with ambiguity, a consequence of numerous identical IMU readings corresponding to different poses. In this paper, we explore the spatial importance of multiple sensors, supervised by text that describes specific actions. Specifically, uncertainty is introduced to derive weighted features for each IMU. We also design a Hierarchical Temporal Transformer (HTT) and apply contrastive learning to achieve precise temporal and feature alignment of sensor data with textual semantics. Experimental results demonstrate our proposed approach achieves significant improvements in multiple metrics compared to existing methods. Notably, with textual supervision, our method not only differentiates between ambiguous actions such as sitting and standing but also produces more precise and natural motion.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
420,771
1904.05276
Advances in Natural Language Question Answering: A Review
Question answering has recently received high attention from artificial intelligence communities due to advancements in learning technologies. Early question answering models used rule-based approaches and then moved to statistical approaches to cope with the vast amount of available information. However, statistical approaches have been shown to underperform in handling the dynamic nature and variation of language. Learning models, by contrast, have shown the capability of handling this dynamic nature and these variations in language. Many deep learning methods have been introduced to question answering, and most of them achieve higher results compared to machine learning and statistical methods. The dynamic nature of language has profited from the nonlinear learning in deep learning. This has created prominent success and a spike in work on question answering. This paper discusses the successes and challenges in question answering systems and the techniques used to address these challenges.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
127,261
2310.10688
A decoder-only foundation model for time-series forecasting
Motivated by recent advances in large language models for Natural Language Processing (NLP), we design a time-series foundation model for forecasting whose out-of-the-box zero-shot performance on a variety of public datasets comes close to the accuracy of state-of-the-art supervised forecasting models for each individual dataset. Our model is based on pretraining a patched-decoder style attention model on a large time-series corpus, and can work well across different forecasting history lengths, prediction lengths and temporal granularities.
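One concrete piece of a patched-decoder design is turning a raw history into fixed-length patches that act as input tokens; a toy sketch follows (the patch length is an arbitrary assumption, not the model's actual value).

```python
# Illustrative sketch: splitting a univariate history into fixed-length
# patches that a decoder-only model treats as input tokens.
import numpy as np

def patch_series(history, patch_len=32):
    n = len(history) // patch_len * patch_len  # drop the oldest remainder
    return history[-n:].reshape(-1, patch_len)  # (num_patches, patch_len)

series = np.sin(np.linspace(0, 20, 500))
tokens = patch_series(series)
print(tokens.shape)  # (15, 32)
```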
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
400,347
2412.14846
Head and Neck Tumor Segmentation of MRI from Pre- and Mid-radiotherapy with Pre-training, Data Augmentation and Dual Flow UNet
Accurate delineation of head and neck tumors and metastatic lymph nodes is crucial for treatment planning and prognostic analysis. Accurate segmentation and quantitative analysis of these structures require pixel-level annotation, making automated segmentation techniques essential for the diagnosis and treatment of head and neck cancer. In this study, we investigated the effects of multiple strategies on the segmentation of pre-radiotherapy (pre-RT) and mid-radiotherapy (mid-RT) images. For the segmentation of pre-RT images, we utilized: 1) a fully supervised learning approach, and 2) the same approach enhanced with pre-trained weights and the MixUp data augmentation technique. For mid-RT images, we introduced a novel computationally friendly network architecture that features separate encoders for mid-RT images and registered pre-RT images with their labels. The mid-RT encoder branch integrates information from pre-RT images and labels progressively during the forward propagation. We selected the highest-performing model from each fold and used their predictions to create an ensemble average for inference. In the final test, our models achieved a segmentation performance of 82.38% for pre-RT and 72.53% for mid-RT on the aggregated Dice Similarity Coefficient (DSC) as HiLab. Our code is available at https://github.com/WltyBY/HNTS-MRG2024_train_code.
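The Dice Similarity Coefficient (DSC) in which the results above are reported has a standard definition; a minimal per-volume version (not the challenge's aggregation code):

```python
# Dice Similarity Coefficient: volumetric overlap between a predicted
# and a reference binary mask, 2|A∩B| / (|A| + |B|).
import numpy as np

def dice(pred, gt, eps=1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + eps)

pred = np.zeros((64, 64, 64)); pred[10:30, 10:30, 10:30] = 1
gt = np.zeros((64, 64, 64)); gt[12:32, 12:32, 12:32] = 1
print(round(dice(pred, gt), 3))
```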
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
518,879
2303.07143
Multi-Microphone Speaker Separation by Spatial Regions
We consider the task of region-based source separation of reverberant multi-microphone recordings. We assume pre-defined spatial regions with a single active source per region. The objective is to estimate the signals from the individual spatial regions as captured by a reference microphone while retaining a correspondence between signals and spatial regions. We propose a data-driven approach using a modified version of a state-of-the-art network, where different layers model spatial and spectro-temporal information. The network is trained to enforce a fixed mapping of regions to network outputs. Using speech from LibriMix, we construct a data set specifically designed to contain the region information. Additionally, we train the network with permutation invariant training. We show that both training methods result in a fixed mapping of regions to network outputs, achieve comparable performance, and that the networks exploit spatial information. The proposed network outperforms a baseline network by 1.5 dB in scale-invariant signal-to-distortion ratio.
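The scale-invariant signal-to-distortion ratio (SI-SDR) in which the 1.5 dB improvement above is reported has a standard definition; a minimal sketch (not code from the paper):

```python
# SI-SDR: project the estimate onto the reference to get the optimally
# scaled target, then measure the target-to-residual power ratio in dB.
import numpy as np

def si_sdr(estimate, reference):
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference          # optimally scaled reference
    noise = estimate - target
    return 10 * np.log10(np.sum(target**2) / np.sum(noise**2))

ref = np.random.randn(16000)
est = ref + 0.1 * np.random.randn(16000)
print(round(si_sdr(est, ref), 2))  # roughly 20 dB for this noise level
```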
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
351,138
2211.16126
Joint Neural Architecture and Hyperparameter Search for Correlated Time Series Forecasting
Sensors in cyber-physical systems often capture interconnected processes and thus emit correlated time series (CTS), the forecasting of which enables important applications. The key to successful CTS forecasting is to uncover the temporal dynamics of time series and the spatial correlations among time series. Deep learning-based solutions exhibit impressive performance at discerning these aspects. In particular, automated CTS forecasting, where the design of an optimal deep learning architecture is automated, enables forecasting accuracy that surpasses what has been achieved by manual approaches. However, automated CTS solutions remain in their infancy and are only able to find optimal architectures for predefined hyperparameters and scale poorly to large-scale CTS. To overcome these limitations, we propose SEARCH, a joint, scalable framework, to automatically devise effective CTS forecasting models. Specifically, we encode each candidate architecture and accompanying hyperparameters into a joint graph representation. We introduce an efficient Architecture-Hyperparameter Comparator (AHC) to rank all architecture-hyperparameter pairs, and we then further evaluate the top-ranked pairs to select a final result. Extensive experiments on six benchmark datasets demonstrate that SEARCH not only eliminates manual efforts but also is capable of better performance than manually designed and existing automatically designed CTS models. In addition, it shows excellent scalability to large CTS.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
true
false
333,525
2008.00190
On Supervised Classification of Feature Vectors with Independent and Non-Identically Distributed Elements
In this paper, we investigate the problem of classifying feature vectors with mutually independent but non-identically distributed elements. First, we show the importance of this problem. Next, we propose a classifier and derive an analytical upper bound on its error probability. We show that the error probability goes to zero as the length of the feature vectors grows, even when there is only one training feature vector per label available. Thereby, we show that for this important problem at least one asymptotically optimal classifier exists. Finally, we provide numerical examples showing that the proposed classifier outperforms conventional classification algorithms when the amount of training data is small and the length of the feature vectors is sufficiently large.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
189,935
1602.07109
Variational Inference for On-line Anomaly Detection in High-Dimensional Time Series
Approximate variational inference has been shown to be a powerful tool for modeling unknown, complex probability distributions. Recent advances in the field allow us to learn probabilistic models of sequences that actively exploit spatial and temporal structure. We apply a Stochastic Recurrent Network (STORN) to learn robot time series data. Our evaluation demonstrates that we can robustly detect anomalies both off- and on-line.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
52,469
2411.14421
From RNNs to Foundation Models: An Empirical Study on Commercial Building Energy Consumption
Accurate short-term energy consumption forecasting for commercial buildings is crucial for smart grid operations. While smart meters and deep learning models enable forecasting using past data from multiple buildings, data heterogeneity from diverse buildings can reduce model performance. The impact of increasing dataset heterogeneity in time series forecasting, while keeping size and model constant, is understudied. We tackle this issue using the ComStock dataset, which provides synthetic energy consumption data for U.S. commercial buildings. Two curated subsets, identical in size and region but differing in building type diversity, are used to assess the performance of various time series forecasting models, including fine-tuned open-source foundation models (FMs). The results show that dataset heterogeneity and model architecture have a greater impact on post-training forecasting performance than the parameter count. Moreover, despite the higher computational cost, fine-tuned FMs demonstrate competitive performance compared to base models trained from scratch.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
510,151
2211.10248
Circuit Design for Predictive Maintenance
Industry 4.0 has become a driver for the entire manufacturing industry. Smart systems have enabled 30% productivity increases, and predictive maintenance has been demonstrated to provide a 50% reduction in machine downtime. So far, the solution has been based on data analytics, which has resulted in a proliferation of sensing technologies and infrastructure for data acquisition, transmission and processing. At the core of factory operation and automation are circuits that control and power factory equipment; innovative circuit design has the potential to address many system integration challenges. We present a new circuit design approach based on circuit-level artificial intelligence solutions, integrated within control and calibration functional blocks during circuit design, improving the predictability and adaptability of each component for predictive maintenance. This approach is envisioned to encourage the development of new EDA tools, such as automatic digital shadow generation and product lifecycle models, that will help identify circuit parameters that adequately define the operating conditions for dynamic prediction and fault detection. Integration of a supplementary artificial intelligence block within the control loop is considered for capturing non-linearities and gain/bandwidth constraints of the main controller and identifying changes in the operating conditions beyond the response of the controller. System integration topics are discussed regarding integration within the OPC Unified Architecture and predictive maintenance interfaces, providing real-time updates to the digital shadow that help maintain an accurate, virtual replica model of the physical system.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
331,248
1408.3458
Stochastic Throughput Optimization for Two-hop Systems with Finite Relay Buffers
Optimal queueing control of multi-hop networks remains a challenging problem even in the simplest scenarios. In this paper, we consider a two-hop half-duplex relaying system with random channel connectivity. The relay is equipped with a finite buffer. We focus on stochastic link selection and transmission rate control to maximize the average system throughput subject to a half-duplex constraint. We formulate this stochastic optimization problem as an infinite horizon average cost Markov decision process (MDP), which is well-known to be a difficult problem. By using sample-path analysis and exploiting the specific problem structure, we first obtain an \emph{equivalent Bellman equation} with reduced state and action spaces. By using \emph{relative value iteration algorithm}, we analyze the properties of the value function of the MDP. Then, we show that the optimal policy has a threshold-based structure by characterizing the \emph{supermodularity} in the optimal control. Based on the threshold-based structure and Markov chain theory, we further simplify the original complex stochastic optimization problem to a static optimization problem over a small discrete feasible set and propose a low-complexity algorithm to solve the simplified static optimization problem by making use of its special structure. Furthermore, we obtain the closed-form optimal threshold for the symmetric case. The analytical results obtained in this paper also provide design insights for two-hop relaying systems with multiple relays equipped with finite relay buffers.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
35,377
2311.10042
Depth Insight -- Contribution of Different Features to Indoor Single-image Depth Estimation
Depth estimation from a single image is a challenging problem in computer vision because binocular disparity or motion information is absent. Although impressive performance has recently been reported in this area using end-to-end trained deep neural architectures, it is hard to know what cues in the images are being exploited by these black-box systems. To this end, in this work, we quantify the relative contributions of the known cues of depth in a monocular depth estimation setting using an indoor scene data set. Our work uses feature extraction techniques to relate the individual features of shape, texture, colour and saturation, taken in isolation, to depth prediction. We find that the shape of objects, extracted by edge detection, contributes substantially more than the others in the indoor setting considered, while the other features also contribute in varying degrees. These insights will help optimise depth estimation models, boosting their accuracy and robustness. They promise to broaden the practical applications of vision-based depth estimation. The project code is attached to the supplementary material and will be published on GitHub.
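As a hedged illustration of isolating the shape cue, the sketch below extracts a simple gradient-based edge map from a grayscale image; the threshold and the gradient operator are illustrative stand-ins for whatever edge detector the study actually used.

```python
# Rough sketch of one isolated depth cue: a binary edge map from image
# gradients, usable as the sole input feature for a depth predictor.
import numpy as np

def edge_map(img, thresh=0.1):
    gy, gx = np.gradient(img.astype(float))   # finite-difference gradients
    mag = np.hypot(gx, gy)                    # gradient magnitude
    return (mag > thresh * mag.max()).astype(float)

img = np.random.rand(120, 160)  # stand-in for a grayscale indoor image
edges = edge_map(img)
print("edge density:", edges.mean())
```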
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
408,394
2312.06531
Uncertainty quantification in automated valuation models with spatially weighted conformal prediction
Non-parametric machine learning models, such as random forests and gradient boosted trees, are frequently used to estimate house prices due to their predictive accuracy, but a main drawback of such methods is their limited ability to quantify prediction uncertainty. Conformal prediction (CP) is a model-agnostic framework for constructing confidence sets around predictions of machine learning models with minimal assumptions. However, due to the spatial dependencies observed in house prices, direct application of CP leads to confidence sets that are not calibrated everywhere, i.e., the confidence sets will be too large in certain geographical regions and too small in others. We survey various approaches to adjust the CP confidence set to account for this and demonstrate their performance on a data set from the housing market in Oslo, Norway. Our findings indicate that calibrating the confidence sets on a spatially weighted version of the non-conformity scores makes the coverage more consistently calibrated across geographical regions. We also perform a simulation study on synthetically generated sale prices to empirically explore the performance of CP on housing market data under idealized conditions with known data-generating mechanisms.
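A minimal sketch of the spatially weighted idea, under simplifying assumptions: calibration residuals are weighted by a Gaussian kernel on distance to the query location, and the interval half-width is a weighted quantile of those residuals (exact-coverage corrections are omitted for brevity, and the kernel and bandwidth are illustrative).

```python
# Spatially weighted split conformal prediction (simplified): residuals
# from calibration sales near the query location count more.
import numpy as np

def weighted_quantile(scores, weights, q):
    order = np.argsort(scores)
    s, w = scores[order], weights[order]
    cdf = np.cumsum(w) / w.sum()
    return s[np.searchsorted(cdf, q)]

def conformal_interval(pred, cal_resid, cal_xy, query_xy, alpha=0.1, bw=1.0):
    d2 = ((cal_xy - query_xy) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bw**2))                 # Gaussian spatial kernel
    r = weighted_quantile(cal_resid, w, 1 - alpha)
    return pred - r, pred + r

cal_xy = np.random.rand(500, 2) * 10              # fake coordinates
cal_resid = np.abs(np.random.randn(500)) * (1 + cal_xy[:, 0] / 10)
lo, hi = conformal_interval(3.2, cal_resid, cal_xy, np.array([2.0, 5.0]))
print(lo, hi)
```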
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
414,554
2205.04424
Accelerated Reinforcement Learning for Temporal Logic Control Objectives
This paper addresses the problem of learning control policies for mobile robots, modeled as unknown Markov Decision Processes (MDPs), that are tasked with temporal logic missions, such as sequencing, coverage, or surveillance. The MDP captures uncertainty in the workspace structure and the outcomes of control decisions. The control objective is to synthesize a control policy that maximizes the probability of accomplishing a high-level task, specified as a Linear Temporal Logic (LTL) formula. To address this problem, we propose a novel accelerated model-based reinforcement learning (RL) algorithm for LTL control objectives that is capable of learning control policies significantly faster than related approaches. Its sample-efficiency relies on biasing exploration towards directions that may contribute to task satisfaction. This is accomplished by leveraging an automaton representation of the LTL task as well as a continuously learned MDP model. Finally, we provide comparative experiments that demonstrate the sample efficiency of the proposed method against recent RL methods for LTL objectives.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
295,635
2109.01229
Multimodal Conditionality for Natural Language Generation
Large scale pretrained language models have demonstrated state-of-the-art performance in language understanding tasks. Their application has recently expanded into multimodality learning, leading to improved representations combining vision and language. However, progress in adapting language models towards conditional Natural Language Generation (NLG) has been limited to a single modality, generally text. We propose MAnTiS, Multimodal Adaptation for Text Synthesis, a general approach for multimodal conditionality in transformer-based NLG models. In this method, we pass inputs from each modality through modality-specific encoders, project to textual token space, and finally join to form a conditionality prefix. We fine-tune the pretrained language model and encoders with the conditionality prefix guiding the generation. We apply MAnTiS to the task of product description generation, conditioning a network on both product images and titles to generate descriptive text. We demonstrate that MAnTiS outperforms strong baseline approaches on standard NLG scoring metrics. Furthermore, qualitative assessments demonstrate that MAnTiS can generate human quality descriptions consistent with given multimodal inputs.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
253,372
1610.05058
Local and global analysis of endocrine regulation as a non-cyclic feedback system
Understanding the sophisticated control mechanisms of the human endocrine system is a challenging task and a crucial step towards precise medical treatment of many dysfunctions and diseases. Although mathematical models describing the endocrine system as a whole are still elusive, some substantial progress has recently been made in theoretically analyzing its subsystems (or axes) that regulate the production of specific hormones. Many of the relevant mathematical models are similar in structure to (or squarely based on) the celebrated Goodwin's oscillator. Such models are convenient for explaining stable periodic oscillations in hormone levels by representing the corresponding endocrine regulation circuits as cyclic feedback systems. However, many real hormonal regulation mechanisms (in particular, testosterone regulation) are in fact known to have non-cyclic structures and involve multiple feedbacks; a Goodwin-type model thus represents only a part of such a complicated mechanism. In this paper, we examine a new mathematical model of hormonal regulation, obtained from the classical Goodwin's oscillator by introducing an additional negative feedback. Local stability properties of the proposed model are studied, and we show that the local instability of its unique equilibrium implies oscillatory behavior of almost all solutions. Furthermore, under additional restrictions we prove that almost all solutions converge to periodic ones.
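For intuition, the following integrates a Goodwin-type oscillator to which a second negative feedback has been added; the particular extra Hill term and all parameter values are our own toy choices, not the model analyzed in the paper.

```python
# Toy Goodwin-type oscillator with an additional negative feedback:
# z inhibits x (the classic loop) and, here, also damps y's production.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s, a=10.0, b=0.5, c=0.5, d=0.5, n=8):
    x, y, z = s
    dx = a / (1 + z**n) - b * x      # classic Goodwin negative feedback
    dy = x / (1 + z**2) - c * y      # extra (illustrative) feedback term
    dz = y - d * z
    return [dx, dy, dz]

sol = solve_ivp(rhs, (0, 200), [0.1, 0.1, 0.1], max_step=0.1)
print(sol.y[:, -1])  # final hormone levels
```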
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
62,474
0906.0434
Total Variation, Adaptive Total Variation and Nonconvex Smoothly Clipped Absolute Deviation Penalty for Denoising Blocky Images
The total variation-based image denoising model has been generalized and extended in numerous ways, improving its performance in different contexts. We propose a new penalty function motivated by the recent progress in the statistical literature on high-dimensional variable selection. Using a particular instantiation of the majorization-minimization algorithm, the optimization problem can be efficiently solved and the computational procedure realized is similar to the spatially adaptive total variation model. Our two-pixel image model shows theoretically that the new penalty function solves the bias problem inherent in the total variation model. The superior performance of the new penalty is demonstrated through several experiments. Our investigation is limited to "blocky" images which have small total variation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
3,812
2106.06684
Multistream ValidNet: Improving 6D Object Pose Estimation by Automatic Multistream Validation
This work presents a novel approach to improve the results of pose estimation by detecting and distinguishing between the occurrence of True and False Positive results. It achieves this by training a binary classifier on the output of an arbitrary pose estimation algorithm, and returns a binary label indicating the validity of the result. We demonstrate that our approach improves upon a state-of-the-art pose estimation result on the Sil\'eane dataset, outperforming a variation of the alternative CullNet method by 4.15% in average class accuracy and 0.73% in overall accuracy at validation. Applying our method can also improve the pose estimation average precision results of Op-Net by 6.06% on average.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
240,576
2409.05712
Cooperative Decision-Making for CAVs at Unsignalized Intersections: A MARL Approach with Attention and Hierarchical Game Priors
The development of autonomous vehicles has shown great potential to enhance the efficiency and safety of transportation systems. However, the decision-making issue in complex human-machine mixed traffic scenarios, such as unsignalized intersections, remains a challenge for autonomous vehicles. While reinforcement learning (RL) has been used to solve complex decision-making problems, existing RL methods still have limitations in dealing with cooperative decision-making of multiple connected autonomous vehicles (CAVs), ensuring safety during exploration, and simulating realistic human driver behaviors. In this paper, a novel and efficient algorithm, Multi-Agent Game-prior Attention Deep Deterministic Policy Gradient (MA-GA-DDPG), is proposed to address these limitations. Our proposed algorithm formulates the decision-making problem of CAVs at unsignalized intersections as a decentralized multi-agent reinforcement learning problem and incorporates an attention mechanism to capture interaction dependencies between ego CAV and other agents. The attention weights between the ego vehicle and other agents are then used to screen interaction objects and obtain prior hierarchical game relations, based on which a safety inspector module is designed to improve the traffic safety. Furthermore, both simulation and hardware-in-the-loop experiments were conducted, demonstrating that our method outperforms other baseline approaches in terms of driving safety, efficiency, and comfort.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
486,871
1905.12781
Learning to Crawl
Web crawling is the problem of keeping a cache of webpages fresh, i.e., having the most recent copy available when a page is requested. This problem is usually coupled with the natural restriction that the bandwidth available to the web crawler is limited. The corresponding optimization problem was solved optimally by Azar et al. [2018] under the assumption that, for each webpage, both the elapsed time between two changes and the elapsed time between two requests follow a Poisson distribution with known parameters. In this paper, we study the same control problem but under the assumption that the change rates are unknown a priori, and thus we need to estimate them in an online fashion using only partial observations (i.e., single-bit signals indicating whether the page has changed since the last refresh). As a point of departure, we characterise the conditions under which one can solve the problem with such partial observability. Next, we propose a practical estimator and compute confidence intervals for it in terms of the elapsed time between the observations. Finally, we show that the explore-and-commit algorithm achieves an $\mathcal{O}(\sqrt{T})$ regret with a carefully chosen exploration horizon. Our simulation study shows that our online policy scales well and achieves close to optimal performance for a wide range of the parameters.
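For the constant-interval case, the single-bit observations determine the change rate in closed form: with refresh interval tau, P(changed) = 1 - exp(-lambda * tau), so lambda can be recovered from the empirical change frequency. A small sketch of this simplified estimator (the paper itself handles varying intervals and derives confidence intervals):

```python
# Estimate a page's Poisson change rate from binary "changed since last
# refresh" signals, assuming a constant refresh interval tau.
import numpy as np

def estimate_rate(change_bits, tau):
    p = np.clip(np.mean(change_bits), 1e-6, 1 - 1e-6)  # avoid log(0)
    return -np.log(1.0 - p) / tau

rng = np.random.default_rng(0)
true_rate, tau = 0.3, 2.0
bits = rng.random(1000) < 1 - np.exp(-true_rate * tau)  # simulated signals
print(estimate_rate(bits, tau))  # close to 0.3
```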
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
132,873
2412.01941
Global Average Feature Augmentation for Robust Semantic Segmentation with Transformers
Robustness to out-of-distribution data is crucial for deploying modern neural networks. Recently, Vision Transformers, such as SegFormer for semantic segmentation, have shown impressive robustness to visual corruptions like blur or noise affecting the acquisition device. In this paper, we propose Channel Wise Feature Augmentation (CWFA), a simple yet efficient feature augmentation technique to improve the robustness of Vision Transformers for semantic segmentation. CWFA applies a globally estimated perturbation per encoder with minimal compute overhead during training. Extensive evaluations on Cityscapes and ADE20K, with three state-of-the-art Vision Transformer architectures: SegFormer, Swin Transformer, and Twins, demonstrate that CWFA-enhanced models significantly improve robustness without affecting clean data performance. For instance, on Cityscapes, a CWFA-augmented SegFormer-B1 model yields up to 27.7% mIoU robustness gain on impulse noise compared to the non-augmented SegFormer-B1. Furthermore, CWFA-augmented SegFormer-B5 achieves a new state-of-the-art 84.3% retention rate, a 0.7% improvement over the recently published FAN+STL.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
513,315
2212.06630
Differentially Private Tree-Based Redescription Mining
Differential privacy provides a strong form of privacy and allows preserving most of the original characteristics of the dataset. Utilizing these benefits requires one to design specific differentially private data analysis algorithms. In this work, we present three tree-based algorithms for mining redescriptions while preserving differential privacy. Redescription mining is an exploratory data analysis method for finding connections between two views over the same entities, such as phenotypes and genotypes of medical patients, for example. It has applications in many fields, including some, like health care informatics, where privacy-preserving access to data is desired. Our algorithms are the first differentially private redescription mining algorithms, and we show via experiments that, despite the inherent noise in differential privacy, it can return trustworthy results even in smaller datasets where noise typically has a stronger effect.
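The building block underneath most differentially private data-analysis algorithms, including tree-based ones, is the Laplace mechanism: add noise scaled to sensitivity/epsilon to each released statistic. A minimal version (standard technique, not the paper's specific algorithm):

```python
# Laplace mechanism: a count query with sensitivity 1 is released with
# Laplace(1/epsilon) noise, which satisfies epsilon-differential privacy.
import numpy as np

def dp_count(true_count, epsilon, sensitivity=1.0, rng=None):
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(scale=sensitivity / epsilon)

print(dp_count(1024, epsilon=0.5))  # noisy, but useful for large counts
```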
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
336,171
2409.04507
3D Data Long-Term Preservation in Cultural Heritage
The report explores the challenges and strategies for preserving 3D digital data in cultural heritage. It discusses the issue of technological obsolescence, emphasising the need for ustainable storage solutions and ongoing data management strategies. Key topics include understanding technological obsolescence, the lifecycle of digital content, digital continuity, data management plans (DMP), FAIR principles, and the use of public repositories. The report also covers the importance of metadata in long-term digital preservation, including types of metadata and strategies for building valuable metadata. It examines the evolving standards and interoperability in 3D format preservation and the importance of managing metadata and paradata. The document provides a comprehensive overview of the challenges and solutions for preserving 3D cultural heritage data in the long term.
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
true
486,416
2211.13572
Physics-Based Object 6D-Pose Estimation during Non-Prehensile Manipulation
We propose a method to track the 6D pose of an object over time, while the object is under non-prehensile manipulation by a robot. At any given time during the manipulation of the object, we assume access to the robot joint controls and an image from a camera. We use the robot joint controls to perform a physics-based prediction of how the object might be moving. We then combine this prediction with the observation coming from the camera, to estimate the object pose as accurately as possible. We use a particle filtering approach to combine the control information with the visual information. We compare the proposed method with two baselines: (i) using only an image-based pose estimation system at each time-step, and (ii) a particle filter which does not perform the computationally expensive physics predictions, but assumes the object moves with constant velocity. Our results show that making physics-based predictions is worth the computational cost, resulting in more accurate tracking, and estimating object pose even when the object is not clearly visible to the camera.
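A skeleton of the particle-filter fusion loop, reduced to a 1-D toy state for clarity; the real system tracks a full 6D pose and replaces the additive motion model below with physics-based predictions from the robot controls.

```python
# Particle filter skeleton: predict with a motion model, reweight by the
# observation likelihood, resample when the effective sample size drops.
import numpy as np

rng = np.random.default_rng(1)
N = 500
particles = rng.normal(0.0, 1.0, N)       # initial pose hypotheses
weights = np.full(N, 1.0 / N)

def step(particles, weights, control, observation, obs_std=0.5):
    # Predict: apply the (here, trivial) motion model plus process noise.
    particles = particles + control + rng.normal(0, 0.1, len(particles))
    # Update: reweight by the likelihood of the camera observation.
    lik = np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights = weights * lik
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < len(particles) / 2:
        idx = rng.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

particles, weights = step(particles, weights, control=0.2, observation=0.25)
print(np.sum(particles * weights))  # posterior mean pose estimate
```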
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
332,521
2301.13192
Robust empirical risk minimization via Newton's method
A new variant of Newton's method for empirical risk minimization is studied, where at each iteration of the optimization algorithm, the gradient and Hessian of the objective function are replaced by robust estimators taken from the existing literature on robust mean estimation for multivariate data. After proving a general theorem about the convergence of successive iterates to a small ball around the population-level minimizer, consequences of the theory in generalized linear models are studied when data are generated from Huber's epsilon-contamination model and/or heavy-tailed distributions. An algorithm for obtaining robust Newton directions based on the conjugate gradient method is also proposed, which may be more appropriate for high-dimensional settings, and conjectures about the convergence of the resulting algorithm are offered. Compared to robust gradient descent, the proposed algorithm enjoys the faster rates of convergence for successive iterates often achieved by second-order algorithms for convex problems, i.e., quadratic convergence in a neighborhood of the optimum, with a stepsize that may be chosen adaptively via backtracking linesearch.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
342,815
2307.00306
SyMFM6D: Symmetry-aware Multi-directional Fusion for Multi-View 6D Object Pose Estimation
Detecting objects and estimating their 6D poses is essential for automated systems to interact safely with the environment. Most 6D pose estimators, however, rely on a single camera frame and suffer from occlusions and ambiguities due to object symmetries. We overcome this issue by presenting a novel symmetry-aware multi-view 6D pose estimator called SyMFM6D. Our approach efficiently fuses the RGB-D frames from multiple perspectives in a deep multi-directional fusion network and predicts predefined keypoints for all objects in the scene simultaneously. Based on the keypoints and an instance semantic segmentation, we efficiently compute the 6D poses by least-squares fitting. To address the ambiguity issues for symmetric objects, we propose a novel training procedure for symmetry-aware keypoint detection including a new objective function. Our SyMFM6D network significantly outperforms the state-of-the-art in both single-view and multi-view 6D pose estimation. We furthermore show the effectiveness of our symmetry-aware training procedure and demonstrate that our approach is robust towards inaccurate camera calibration and dynamic camera setups.
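The least-squares pose fitting from predicted keypoints can be done with the standard Kabsch algorithm; a self-contained sketch follows (the paper's exact solver is not shown, but this is the textbook closed form).

```python
# Kabsch algorithm: the rotation/translation minimizing the squared
# error between model-frame keypoints and their scene observations.
import numpy as np

def fit_pose(model_pts, scene_pts):
    mu_m, mu_s = model_pts.mean(0), scene_pts.mean(0)
    H = (model_pts - mu_m).T @ (scene_pts - mu_s)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard vs. reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = mu_s - R @ mu_m
    return R, t

model = np.random.randn(8, 3)
A = np.linalg.qr(np.random.randn(3, 3))[0]
R_true = A if np.linalg.det(A) > 0 else -A          # proper rotation
t_true = np.array([0.1, 0.2, 0.3])
scene = model @ R_true.T + t_true
R, t = fit_pose(model, scene)
print(np.allclose(R @ model.T + t[:, None], scene.T, atol=1e-6))  # True
```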
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
376,956
2110.05850
Improving Binary Neural Networks through Fully Utilizing Latent Weights
Binary Neural Networks (BNNs) rely on a real-valued auxiliary variable W to help binary training. However, pioneering binary works only use W to accumulate gradient updates during backward propagation, which cannot fully exploit its power and may hinder novel advances in BNNs. In this work, we explore the role of W in training besides acting as a latent variable. Notably, we propose to add W into the computation graph, making it perform as a real-valued feature extractor to aid the binary training. We explore different ways to utilize the real-valued weights and propose a specialized supervision. Visualization experiments qualitatively verify the effectiveness of our approach in making it easier to distinguish between different categories. Quantitative experiments show that our approach outperforms the current state of the art, further closing the performance gap between floating-point networks and BNNs. Evaluation on ImageNet with ResNet-18 (Top-1 63.4%) and ResNet-34 (Top-1 67.0%) achieves a new state of the art.
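For context, the conventional role of W that the abstract contrasts against is straight-through binarization: the forward pass uses sign(W) while gradients update the latent W. A minimal sketch of that baseline mechanism (standard technique, not the paper's new contribution):

```python
# Straight-through binarization: forward uses sign(W); the backward pass
# routes (clipped) gradients to the latent real-valued weights W.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()  # clipped straight-through

w = torch.randn(4, 4, requires_grad=True)
y = BinarizeSTE.apply(w).sum()
y.backward()
print(w.grad)  # gradients flow to the latent weights, not the binary ones
```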
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
260,427
1202.3750
Fractional Moments on Bandit Problems
Reinforcement learning addresses the dilemma between exploration to find profitable actions and exploitation to act according to the best observations already made. Bandit problems are one such class of problems in stateless environments that represent this explore/exploit situation. We propose a learning algorithm for bandit problems based on fractional expectation of rewards acquired. The algorithm is theoretically shown to converge on an eta-optimal arm and achieve O(n) sample complexity. Experimental results show the algorithm incurs substantially lower regrets than parameter-optimized eta-greedy and SoftMax approaches and other low sample complexity state-of-the-art techniques.
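A toy illustration of ranking arms by a fractional moment E[r^q] of observed rewards, which mirrors the idea in the abstract; the reward distributions, the value of q, and the greedy selection are our own assumptions, not the authors' algorithm.

```python
# Rank bandit arms by the empirical fractional moment mean(r**q),
# q in (0, 1), estimated from nonnegative reward samples per arm.
import numpy as np

rng = np.random.default_rng(0)
true_means = [0.3, 0.5, 0.7]
q = 0.5
pulls = [rng.uniform(0, 2 * m, 200) for m in true_means]  # mean-m rewards

frac_moments = [np.mean(r ** q) for r in pulls]
print("estimated fractional moments:", np.round(frac_moments, 3))
print("selected arm:", int(np.argmax(frac_moments)))  # highest-moment arm
```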
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
14,422
1610.00029
Microscopic Pedestrian Flow Characteristics: Development of an Image Processing Data Collection and Simulation Model
Microscopic pedestrian studies consider the detailed interaction of pedestrians to control their movement in pedestrian traffic flow. The tools to collect microscopic data and to analyze microscopic pedestrian flow are still very much in their infancy, and the microscopic pedestrian flow characteristics need to be understood. Manual, semi-manual and automatic image processing data collection systems were developed. It was found that the microscopic speeds resemble a normal distribution with a mean of 1.38 m/s and a standard deviation of 0.37 m/s. The acceleration distribution also bears a resemblance to the normal distribution with an average of 0.68 m/s^2. A physics-based microscopic pedestrian simulation model was also developed. Both the Microscopic Video Data Collection and the Microscopic Pedestrian Simulation Model generate a database called the NTXY database. The formulations of the flow performance, or microscopic pedestrian characteristics, are explained. Sensitivity of the simulation and the relationships between the flow performances are described. Validation of the simulation using real-world data is then explained through a comparison between the average instantaneous speed distributions of the real-world data and the results of the simulations. The simulation model is then applied in experiments on hypothetical situations to gain more understanding of pedestrian behavior in one-way and two-way situations, to examine the behavior of the system if the number of elderly pedestrians increases, and to evaluate a policy of lane-like segregation for pedestrian crossings and inspect the performance of the crossing. It was revealed that these microscopic pedestrian studies have been successfully applied to give a better understanding of microscopic pedestrian flow behavior, to predict theoretical and practical situations, and to evaluate design policies before their implementation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
61,780
2012.07237
Accurate Cell Segmentation in Digital Pathology Images via Attention Enforced Networks
Automatic cell segmentation is an essential step in the pipeline of computer-aided diagnosis (CAD), such as the detection and grading of breast cancer. Accurate segmentation of cells can not only assist the pathologists to make a more precise diagnosis, but also save much time and labor. However, this task suffers from stain variation, cell inhomogeneous intensities, background clutters and cells from different tissues. To address these issues, we propose an Attention Enforced Network (AENet), which is built on spatial attention module and channel attention module, to integrate local features with global dependencies and weight effective channels adaptively. Besides, we introduce a feature fusion branch to bridge high-level and low-level features. Finally, the marker controlled watershed algorithm is applied to post-process the predicted segmentation maps for reducing the fragmented regions. In the test stage, we present an individual color normalization method to deal with the stain variation problem. We evaluate this model on the MoNuSeg dataset. The quantitative comparisons against several prior methods demonstrate the superiority of our approach.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
211,388
1804.07875
Multi-lingual Common Semantic Space Construction via Cluster-consistent Word Embedding
We construct a multilingual common semantic space based on distributional semantics, where words from multiple languages are projected into a shared space to enable knowledge and resource transfer across languages. Beyond word alignment, we introduce multiple cluster-level alignments and enforce the word clusters to be consistently distributed across multiple languages. We exploit three signals for clustering: (1) neighbor words in the monolingual word embedding space; (2) character-level information; and (3) linguistic properties (e.g., apposition, locative suffix) derived from linguistic structure knowledge bases available for thousands of languages. We introduce a new cluster-consistent correlational neural network to construct the common semantic space by aligning words as well as clusters. Intrinsic evaluation on monolingual and multilingual QVEC tasks shows our approach achieves significantly higher correlation with linguistic features than state-of-the-art multi-lingual embedding learning methods do. Using low-resource language name tagging as a case study for extrinsic evaluation, our approach achieves up to 24.5\% absolute F-score gain over the state of the art.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
95,624
1305.0321
Hidden Markov Model Identifiability via Tensors
The prevalence of hidden Markov models (HMMs) in various applications of statistical signal processing and communications is a testament to the power and flexibility of the model. In this paper, we link the identifiability problem with tensor decomposition, in particular, the Canonical Polyadic decomposition. Using recent results in deriving uniqueness conditions for tensor decomposition, we are able to provide a necessary and sufficient condition for the identification of the parameters of discrete time finite alphabet HMMs. This result resolves a long standing open problem regarding the derivation of a necessary and sufficient condition for uniquely identifying an HMM. We then further extend recent preliminary work on the identification of HMMs with multiple observers by deriving necessary and sufficient conditions for identifiability in this setting.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
24,340
1806.09783
Understanding Dropout as an Optimization Trick
As one of the standard approaches to training deep neural networks, dropout has been applied to regularize large models and avoid overfitting, and the improvement in performance from dropout has been explained as avoiding co-adaptation between nodes. However, when correlations between nodes are compared after training networks with and without dropout, the question arises whether co-adaptation avoidance completely explains the dropout effect. In this paper, we propose an additional explanation of why dropout works and introduce a new technique for designing better activation functions. First, we show that dropout can be explained as an optimization technique that pushes the input towards the saturation area of the nonlinear activation function by accelerating the flow of gradient information even in the saturation area during backpropagation. Based on this explanation, we propose a new technique for activation functions, {\em gradient acceleration in activation function (GAAF)}, that accelerates gradients to flow even in the saturation area. The input to the activation function can then climb into the saturation area, which makes the network more robust because the model converges in a flat region. Experimental results support our explanation of dropout and confirm that the proposed GAAF technique improves image classification performance with the expected properties.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
101,419
2404.17288
ExcluIR: Exclusionary Neural Information Retrieval
Exclusion is an important and universal linguistic skill that humans use to express what they do not want. However, in the information retrieval community, there is little research on exclusionary retrieval, where users express what they do not want in their queries. In this work, we investigate the scenario of exclusionary retrieval in document retrieval for the first time. We present ExcluIR, a set of resources for exclusionary retrieval, consisting of an evaluation benchmark and a training set that helps retrieval models comprehend exclusionary queries. The evaluation benchmark includes 3,452 high-quality exclusionary queries, each of which has been manually annotated. The training set contains 70,293 exclusionary queries, each paired with a positive document and a negative document. We conduct detailed experiments and analyses, obtaining three main observations: (1) existing retrieval models with different architectures struggle to effectively comprehend exclusionary queries; (2) although integrating our training data can improve the performance of retrieval models on exclusionary retrieval, there still exists a gap compared to human performance; and (3) generative retrieval models have a natural advantage in handling exclusionary queries. To facilitate future research on exclusionary retrieval, we share the benchmark and evaluation scripts at \url{https://github.com/zwh-sdu/ExcluIR}.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
449,809
2003.08376
Inverting the Pose Forecasting Pipeline with SPF2: Sequential Pointcloud Forecasting for Sequential Pose Forecasting
Many autonomous systems forecast aspects of the future in order to aid decision-making. For example, self-driving vehicles and robotic manipulation systems often forecast future object poses by first detecting and tracking objects. However, this detect-then-forecast pipeline is expensive to scale, as pose forecasting algorithms typically require labeled sequences of object poses, which are costly to obtain in 3D space. Can we scale performance without requiring additional labels? We hypothesize yes, and propose inverting the detect-then-forecast pipeline. Instead of detecting, tracking and then forecasting the objects, we propose to first forecast 3D sensor data (e.g., point clouds with $100$k points) and then detect/track objects on the predicted point cloud sequences to obtain future poses, i.e., a forecast-then-detect pipeline. This inversion makes it less expensive to scale pose forecasting, as the sensor data forecasting task requires no labels. Part of this work's focus is on the challenging first step -- Sequential Pointcloud Forecasting (SPF), for which we also propose an effective approach, SPFNet. To compare our forecast-then-detect pipeline with the detect-then-forecast pipeline, we propose an evaluation procedure and two metrics. Through experiments on a robotic manipulation dataset and two driving datasets, we show that SPFNet is effective for the SPF task, that our forecast-then-detect pipeline outperforms the detect-then-forecast approaches we compared against, and that pose forecasting performance improves with the addition of unlabeled data.
false
false
false
false
true
false
true
true
false
false
false
true
false
false
true
false
false
false
168,719
1905.04232
Automatic Programming of Cellular Automata and Artificial Neural Networks Guided by Philosophy
Many computer models such as cellular automata and artificial neural networks have been developed and successfully applied. However, in some cases, these models might restrict the possible solutions or their solutions might be difficult to interpret. To overcome this problem, we outline a new approach, the so-called allagmatic method, that automatically programs and executes models with as few limitations as possible while maintaining human interpretability. Earlier, we described a metamodel and its building blocks according to the philosophical concepts of structure (spatial dimension) and operation (temporal dimension). These are entity, milieu, and update function, which together abstractly describe cellular automata, artificial neural networks, and possibly any kind of computer model. By automatically combining these building blocks in an evolutionary computation, interpretability might be increased through the relationship to the metamodel, and models might be translated into more interpretable models via the metamodel. We propose generic and object-oriented programming to implement the entities and their milieus as dynamic and generic arrays and the update function as a method. We present two experiments in which a simple cellular automaton and an artificial neural network are automatically programmed, compiled, and executed. A target state is successfully evolved and learned in the cellular automaton and artificial neural network, respectively. We conclude that the allagmatic method can create and execute cellular automaton and artificial neural network models in an automated manner guided by philosophy.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
true
true
false
true
130,409
2404.18327
MultiMAE-DER: Multimodal Masked Autoencoder for Dynamic Emotion Recognition
This paper presents a novel approach to processing multimodal data for dynamic emotion recognition, named the Multimodal Masked Autoencoder for Dynamic Emotion Recognition (MultiMAE-DER). MultiMAE-DER leverages the closely correlated representation information within spatiotemporal sequences across visual and audio modalities. By utilizing a pre-trained masked autoencoder model, MultiMAE-DER is accomplished through simple, straightforward fine-tuning. The performance of MultiMAE-DER is enhanced by optimizing six fusion strategies for multimodal input sequences. These strategies address dynamic feature correlations within cross-domain data across spatial, temporal, and spatiotemporal sequences. In comparison to state-of-the-art multimodal supervised learning models for dynamic emotion recognition, MultiMAE-DER enhances the weighted average recall (WAR) by 4.41% on the RAVDESS dataset and by 2.06% on CREMA-D. Furthermore, when compared with the state-of-the-art multimodal self-supervised learning model, MultiMAE-DER achieves a 1.86% higher WAR on the IEMOCAP dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
450,221
1805.09639
Online Regularized Nonlinear Acceleration
Regularized nonlinear acceleration (RNA) estimates the minimum of a function by post-processing iterates from an algorithm such as the gradient method. It can be seen as a regularized version of Anderson acceleration, a classical acceleration scheme from numerical analysis. The new scheme provably improves the rate of convergence of fixed-step gradient descent, and its empirical performance is comparable to that of quasi-Newton methods. However, RNA cannot accelerate faster multistep algorithms like Nesterov's method and often diverges in this context. Here, we adapt RNA to overcome these issues, so that our scheme can be used on fast algorithms such as gradient methods with momentum. We show optimal complexity bounds for quadratics and asymptotically optimal rates on general convex minimization problems. Moreover, this new scheme works online, i.e., extrapolated solution estimates can be reinjected at each iteration, significantly improving numerical performance over classical accelerated methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
98,464
2312.01045
PROFL: A Privacy-Preserving Federated Learning Method with Stringent Defense Against Poisoning Attacks
Federated Learning (FL) faces two major issues: privacy leakage and poisoning attacks, which may seriously undermine the reliability and security of the system. Overcoming both simultaneously poses a great challenge, because privacy protection policies prohibit access to users' local gradients to avoid privacy leakage, while Byzantine-robust methods necessitate access to these gradients to defend against poisoning attacks. To address these problems, we propose PROFL, a novel privacy-preserving Byzantine-robust FL framework. PROFL is based on a two-trapdoor additively homomorphic encryption algorithm and blinding techniques to ensure data privacy throughout the entire FL process. During the defense process, PROFL first utilizes a secure Multi-Krum algorithm to remove malicious gradients at the user level. Then, according to the Pauta criterion, we propose a novel statistics-based privacy-preserving defense algorithm to eliminate outlier interference at the feature level and resist impersonation poisoning attacks with stronger concealment. Detailed theoretical analysis proves the security and efficiency of the proposed method. We conducted extensive experiments on two benchmark datasets, and PROFL improved accuracy by 39% to 75% across different attack settings compared to similar privacy-preserving robust methods, demonstrating its significant advantage in robustness.
false
false
false
false
true
false
true
false
false
false
false
false
true
false
false
false
false
false
412,284
2110.05803
SDWNet: A Straight Dilated Network with Wavelet Transformation for Image Deblurring
Image deblurring is a classical computer vision problem that aims to recover a sharp image from a blurred image. To solve this problem, existing methods apply encoder-decoder architectures to design complex networks that achieve good performance. However, most of these methods use repeated up-sampling and down-sampling structures to expand the receptive field, which results in texture information loss during the sampling process, and some of them adopt multiple stages, which leads to difficulties with convergence. Therefore, our model uses dilated convolution to obtain a large receptive field with high spatial resolution. By making full use of different receptive fields, our method achieves better performance. On this basis, we reduce the number of up-sampling and down-sampling operations and design a simple network structure. Besides, we propose a novel module using the wavelet transform, which effectively helps the network to recover clear high-frequency texture details. Qualitative and quantitative evaluations on real and synthetic datasets show that our deblurring method is comparable to existing algorithms in terms of performance, with much lower training requirements. The source code and pre-trained models are available at https://github.com/FlyEgle/SDWNet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
260,409
1509.02768
Joint Relay and Jammer Selection Improves the Physical Layer Security in the Face of CSI Feedback Delays
We enhance the physical-layer security (PLS) of amplify-and-forward relaying networks with the aid of joint relay and jammer selection (JRJS), despite the deleterious effect of channel state information (CSI) feedback delays. Furthermore, we conceive a new outage-based characterization approach for the JRJS scheme. The traditional best relay selection (TBRS) is also considered as a benchmark. We first derive closed-form expressions of both the connection outage probability (COP) and the secrecy outage probability (SOP) for the TBRS and JRJS schemes. Then, a reliable-and-secure connection probability (RSCP) is defined and analyzed for characterizing the effect of the correlation between the COP and SOP introduced by the corporate source-relay link. The reliability-security ratio (RSR) is introduced for characterizing the relationship between reliability and security through asymptotic analysis. Moreover, the concept of effective secrecy throughput is defined as the product of the secrecy rate and the RSCP for the sake of characterizing the overall efficiency of the system, as determined by the transmit SNR, the secrecy codeword rate and the power sharing ratio between the relay and the jammer. The impact of the direct source-eavesdropper link and additional performance comparisons with respect to other related selection schemes are further included. Our numerical results show that the JRJS scheme outperforms the TBRS method both in terms of the RSCP and in terms of its effective secrecy throughput, but it is more sensitive to feedback delays. Increasing the transmit SNR will not always improve the overall throughput. Moreover, the RSR results demonstrate that upon reducing the CSI feedback delays, the reliability improves more substantially than the security degrades, implying an overall improvement in terms of the security-reliability tradeoff.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
46,766
1909.09149
Timage -- A Robust Time Series Classification Pipeline
Time series are series of values ordered by time. This kind of data can be found in many real-world settings. Classifying time series is a difficult task and an active area of research. This paper investigates the use of transfer learning in Deep Neural Networks and a 2D representation of time series known as Recurrence Plots. In order to utilize the research done in the area of image classification, where Deep Neural Networks have achieved very good results, we use a Residual Neural Network architecture known as ResNet. As preprocessing of time series is a major part of every time series classification pipeline, the proposed method simplifies this step and requires only a few parameters. For the first time, we propose a method for multi time series classification: training a single network to classify all datasets in the archive. We are among the first to evaluate the method on the latest 2018 release of the UCR archive, a well-established time series classification benchmarking dataset.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
146,168
2208.09910
Revisiting Weak-to-Strong Consistency in Semi-Supervised Semantic Segmentation
In this work, we revisit the weak-to-strong consistency framework, popularized by FixMatch from semi-supervised classification, where the prediction of a weakly perturbed image serves as supervision for its strongly perturbed version. Intriguingly, we observe that such a simple pipeline already achieves competitive results against recent advanced works when transferred to our segmentation scenario. However, its success heavily relies on the manual design of strong data augmentations, which may be limited and inadequate for exploring a broader perturbation space. Motivated by this, we propose an auxiliary feature perturbation stream as a supplement, leading to an expanded perturbation space. On the other hand, to sufficiently probe original image-level augmentations, we present a dual-stream perturbation technique, enabling two strong views to be simultaneously guided by a common weak view. Consequently, our overall Unified Dual-Stream Perturbations approach (UniMatch) surpasses all existing methods significantly across all evaluation protocols on the Pascal, Cityscapes, and COCO benchmarks. Its superiority is also demonstrated in remote sensing interpretation and medical image analysis. We hope our reproduced FixMatch and our results can inspire more future works. Code and logs are available at https://github.com/LiheYoung/UniMatch.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
313,882
0809.3731
Uncertainty Relations for Shift-Invariant Analog Signals
The past several years have witnessed a surge of research investigating various aspects of sparse representations and compressed sensing. Most of this work has focused on the finite-dimensional setting in which the goal is to decompose a finite-length vector into a given finite dictionary. Underlying many of these results is the conceptual notion of an uncertainty principle: a signal cannot be sparsely represented in two different bases. Here, we extend these ideas and results to the analog, infinite-dimensional setting by considering signals that lie in a finitely-generated shift-invariant (SI) space. This class of signals is rich enough to include many interesting special cases such as multiband signals and splines. By adapting the notion of coherence defined for finite dictionaries to infinite SI representations, we develop an uncertainty principle similar in spirit to its finite counterpart. We demonstrate tightness of our bound by considering a bandlimited lowpass train that achieves the uncertainty principle. Building upon these results and similar work in the finite setting, we show how to find a sparse decomposition in an overcomplete dictionary by solving a convex optimization problem. The distinguishing feature of our approach is the fact that even though the problem is defined over an infinite domain with infinitely many variables and constraints, under certain conditions on the dictionary spectrum our algorithm can find the sparsest representation by solving a finite-dimensional problem.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
2,395
1403.1177
Effects of temporal correlations on cascades: Threshold models on temporal networks
A person's decision to adopt an idea or product is often driven by the decisions of peers, mediated through a network of social ties. A common way of modeling adoption dynamics is to use threshold models, where a node may become an adopter given a high enough rate of contacts with adopted neighbors. We study the dynamics of threshold models that take both the network topology and the timings of contacts into account, using empirical contact sequences as substrates. The models are designed such that adoption is driven by the number of contacts with different adopted neighbors within a chosen time. We find that while some networks support cascades leading to network-level adoption, others do not: the propagation of adoption depends on several factors, from the frequency of contacts to the burstiness and timing correlations of contact sequences. More specifically, burstiness is seen to suppress cascade sizes when compared to randomised contact timings, while timing correlations between contacts on adjacent links facilitate cascades.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
31,362
1811.06773
A Novel Approach to Sparse Inverse Covariance Estimation Using Transform Domain Updates and Exponentially Adaptive Thresholding
Sparse Inverse Covariance Estimation (SICE) is useful in many practical data analyses. Recovering the connectivity/non-connectivity graph of covariates is among the most important data mining and learning problems. In this paper, we introduce a novel SICE approach using adaptive thresholding. Our method is based on updates in a transformed domain of the desired matrix and exponentially decaying adaptive thresholding in the main domain (the inverse covariance matrix domain). In addition to the proposed algorithm, a convergence analysis is also provided. In the numerical experiments section, we show that the proposed method outperforms state-of-the-art methods in terms of accuracy.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
113,597
2011.06983
A Secure Distributed Ledger for Transactive Energy: The Electron Volt Exchange (EVE) Blockchain
The adoption of blockchain for Transactive Energy has gained significant momentum as it allows mutually non-trusting agents to trade energy services in a trustless energy market. Research to date has assumed that the built-in Byzantine Fault Tolerance in recording transactions in a ledger is sufficient to ensure integrity. Such work must be extended to address security gaps including random bilateral transactions that do not guarantee reliable and efficient market operation, and market participants having incentives to cheat when reporting actual production/consumption figures. Work herein introduces the Electron Volt Exchange framework with the following characteristics: 1) a distributed protocol for pricing and scheduling prosumers' production/consumption while keeping constraints and bids private, and 2) a distributed algorithm to prevent theft that verifies prosumers' compliance to scheduled transactions using information from grid sensors (such as smart meters) and mitigates the impact of false data injection attacks. Flexibility and robustness of the approach are demonstrated through simulation and implementation using Hyperledger Fabric.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
206,403
1501.01811
Is "Compressed Sensing" compressive? Can it beat the Nyquist Sampling Approach?
The data compression capability of "Compressed sensing (sampling)" in signal discretization is numerically evaluated and found to be far from the theoretical upper bound defined by signal sparsity. It is shown that, for cases when ordinary sampling with subsequent data compression is prohibitive, there is at least one alternative to Compressed sensing that is more efficient in terms of data compression capability and is simpler and more intuitive: random sparse sampling and restoration of image band-limited approximations based on the energy compaction capability of transforms. It is also shown that assertions that "Compressed sensing" can beat the Nyquist sampling approach are rooted in a misinterpretation of sampling theory.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
39,125
2410.23356
Sequential Order-Robust Mamba for Time Series Forecasting
Mamba has recently emerged as a promising alternative to Transformers, offering near-linear complexity in processing sequential data. However, while channels in time series (TS) data have no specific order in general, recent studies have adopted Mamba to capture channel dependencies (CD) in TS, introducing a sequential order bias. To address this issue, we propose SOR-Mamba, a TS forecasting method that 1) incorporates a regularization strategy to minimize the discrepancy between two embedding vectors generated from data with reversed channel orders, thereby enhancing robustness to channel order, and 2) eliminates the 1D-convolution originally designed to capture local information in sequential data. Furthermore, we introduce channel correlation modeling (CCM), a pretraining task aimed at preserving correlations between channels from the data space to the latent space in order to enhance the ability to capture CD. Extensive experiments demonstrate the efficacy of the proposed method across standard and transfer learning scenarios. Code is available at https://github.com/seunghan96/SOR-Mamba.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
504,011