id stringlengths 9 16 | title stringlengths 4 278 | abstract stringlengths 3 4.08k | cs.HC bool 2 classes | cs.CE bool 2 classes | cs.SD bool 2 classes | cs.SI bool 2 classes | cs.AI bool 2 classes | cs.IR bool 2 classes | cs.LG bool 2 classes | cs.RO bool 2 classes | cs.CL bool 2 classes | cs.IT bool 2 classes | cs.SY bool 2 classes | cs.CV bool 2 classes | cs.CR bool 2 classes | cs.CY bool 2 classes | cs.MA bool 2 classes | cs.NE bool 2 classes | cs.DB bool 2 classes | Other bool 2 classes | __index_level_0__ int64 0 541k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2312.08538 | Contractive error feedback for gradient compression | On-device memory concerns in distributed deep learning have become severe due to (i) the growth of model size in multi-GPU training, and (ii) the wide adoption of deep neural networks for federated learning on IoT devices which have limited storage. In such settings, communication-efficient optimization methods are attractive alternatives; however, they still struggle with memory issues. To tackle these challenges, we propose a communication-efficient method called contractive error feedback (ConEF). As opposed to SGD with error feedback (EFSGD), which manages memory inefficiently, ConEF obtains the sweet spot of convergence and memory usage, and achieves communication efficiency by leveraging biased and all-reducible gradient compression. We empirically validate ConEF on various learning tasks, including image classification, language modeling, and machine translation, and observe that ConEF saves 80\% - 90\% of the extra memory in EFSGD with almost no loss in test performance, while also achieving a 1.3x - 5x speedup over SGD. Through our work, we also demonstrate the feasibility and convergence of ConEF to clear the theoretical barrier to integrating ConEF into popular memory-efficient frameworks such as ZeRO-3. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 415,335 |
1909.06480 | An Alert-Generation Framework for Improving Resiliency in Human-Supervised, Multi-Agent Teams | Human supervision in multi-agent teams is a critical requirement to ensure that the decision-maker's risk preferences are utilized to assign tasks to robots. In stressful, complex missions that pose risk to human health and life, such as humanitarian-assistance and disaster-relief missions, human mistakes or delays in tasking robots can adversely affect the mission. To assist human decision making in such missions, we present an alert-generation framework capable of detecting various modes of potential failure or performance degradation. We demonstrate that our framework, based on state machine simulation and formal methods, offers probabilistic modeling to estimate the likelihood of unfavorable events. We introduce smart simulation, which offers a computationally efficient way of detecting low-probability situations compared to standard Monte-Carlo simulations. Moreover, for a certain class of problems, our inference-based method can provide guarantees on correctly detecting task failures. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | true | false | false | false | 145,384 |
2412.05214 | AI's assigned gender affects human-AI cooperation | Cooperation between humans and machines is increasingly vital as artificial intelligence (AI) becomes more integrated into daily life. Research indicates that people are often less willing to cooperate with AI agents than with humans, more readily exploiting AI for personal gain. While prior studies have shown that giving AI agents human-like features influences people's cooperation with them, the impact of AI's assigned gender remains underexplored. This study investigates how human cooperation varies based on gender labels assigned to AI agents with which they interact. In the Prisoner's Dilemma game, 402 participants interacted with partners labelled as AI (bot) or humans. The partners were also labelled male, female, non-binary, or gender-neutral. Results revealed that participants tended to exploit female-labelled and distrust male-labelled AI agents more than their human counterparts, reflecting gender biases similar to those in human-human interactions. These findings highlight the significance of gender biases in human-AI interactions that must be considered in future policy, design of interactive AI systems, and regulation of their use. | true | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | true | 514,738 |
2411.16832 | Edit Away and My Face Will not Stay: Personal Biometric Defense against Malicious Generative Editing | Recent advancements in diffusion models have made generative image editing more accessible, enabling creative edits but raising ethical concerns, particularly regarding malicious edits to human portraits that threaten privacy and identity security. Existing protection methods primarily rely on adversarial perturbations to nullify edits but often fail against diverse editing requests. We propose FaceLock, a novel approach to portrait protection that optimizes adversarial perturbations to destroy or significantly alter biometric information, rendering edited outputs biometrically unrecognizable. FaceLock integrates facial recognition and visual perception into perturbation optimization to provide robust protection against various editing attempts. We also highlight flaws in commonly used evaluation metrics and reveal how they can be manipulated, emphasizing the need for reliable assessments of protection. Experiments show FaceLock outperforms baselines in defending against malicious edits and is robust against purification techniques. Ablation studies confirm its stability and broad applicability across diffusion-based editing algorithms. Our work advances biometric defense and sets the foundation for privacy-preserving practices in image editing. The code is available at: https://github.com/taco-group/FaceLock. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 511,213 |
2312.00250 | Advancements and Trends in Ultra-High-Resolution Image Processing: An Overview | To further improve visual enjoyment, Ultra-High-Definition (UHD) images are currently attracting wide attention. Here, UHD images are usually understood as images with a resolution greater than or equal to $3840 \times 2160$. However, since imaging equipment is subject to environmental noise or equipment jitter, UHD images are prone to contrast degradation, blurring, low dynamic range, etc. To address these issues, a large number of algorithms for UHD image enhancement have been proposed. In this paper, we introduce the current state of UHD image enhancement from two perspectives: the application field and the technology. In addition, we briefly explore its trends. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 411,973 |
2305.09257 | A new node-shift encoding representation for the travelling salesman problem | This paper presents a new genetic algorithm encoding representation to solve the travelling salesman problem. To assess the performance of the proposed chromosome structure, we compare it with state-of-the-art encoding representations. For that purpose, we use 14 benchmarks of different sizes taken from TSPLIB. Finally, after conducting the experimental study, we report the obtained results and draw our conclusion. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | 364,577 |
2101.01897 | Performance Analysis and Optimization of Bidirectional Overlay Cognitive Radio Networks with Hybrid-SWIPT | This paper considers a cooperative cognitive radio network with two primary users (PUs) and two secondary users (SUs) that enables two-way communications of primary and secondary systems in conjunction with non-linear energy harvesting based simultaneous wireless information and power transfer (SWIPT). With the considered network, SUs are able to realize their communications over the licensed spectrum while extending relay assistance to the PUs. The overall bidirectional end-to-end transmission takes place in four phases, which include both energy harvesting (EH) and information transfer. A non-linear energy harvester with a hybrid SWIPT scheme is adopted in which both power-splitting and time-switching EH techniques are used. The SUs aid in relay cooperation by performing an amplify-and-forward operation, whereas selection combining technique is adopted at the PUs to extract the intended signal from multiple received signals broadcasted by the SUs. Accurate outage probability expressions for the primary and secondary links are derived under the Nakagami-$m$ fading environment. Further, the system behavior is analyzed with respect to achievable system throughput and energy efficiency. Since the performance of the considered system is strongly affected by the spectrum sharing factor and hybrid SWIPT parameters, particle swarm optimization is implemented to optimize the system parameters so as to maximize the system throughput and energy efficiency. Simulation results are provided to corroborate the performance analysis and give useful insights into the system behavior concerning various system/channel parameters. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 214,480 |
2407.02516 | EditFollower: Tunable Car Following Models for Customizable Adaptive Cruise Control Systems | In the realm of driving technologies, fully autonomous vehicles have not been widely adopted yet, making advanced driver assistance systems (ADAS) crucial for enhancing driving experiences. Adaptive Cruise Control (ACC) emerges as a pivotal component of ADAS. However, current ACC systems often employ fixed settings, failing to intuitively capture drivers' social preferences and leading to potential function disengagement. To overcome these limitations, we propose the Editable Behavior Generation (EBG) model, a data-driven car-following model that allows for adjusting driving discourtesy levels. The framework integrates diverse courtesy calculation methods into long short-term memory (LSTM) and Transformer architectures, offering a comprehensive approach to capture nuanced driving dynamics. By integrating various discourtesy values during the training process, our model generates realistic agent trajectories with different levels of courtesy in car-following behavior. Experimental results on the HighD and Waymo datasets showcase a reduction in Mean Squared Error (MSE) of spacing and MSE of speed compared to baselines, establishing style controllability. To the best of our knowledge, this work represents the first data-driven car-following model capable of dynamically adjusting discourtesy levels. Our model provides valuable insights for the development of ACC systems that take into account drivers' social preferences. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 469,773 |
2310.02207 | Language Models Represent Space and Time | The capabilities of large language models (LLMs) have sparked debate over whether such systems just learn an enormous collection of superficial statistics or a set of more coherent and grounded representations that reflect the real world. We find evidence for the latter by analyzing the learned representations of three spatial datasets (world, US, NYC places) and three temporal datasets (historical figures, artworks, news headlines) in the Llama-2 family of models. We discover that LLMs learn linear representations of space and time across multiple scales. These representations are robust to prompting variations and unified across different entity types (e.g. cities and landmarks). In addition, we identify individual "space neurons" and "time neurons" that reliably encode spatial and temporal coordinates. While further investigation is needed, our results suggest modern LLMs learn rich spatiotemporal representations of the real world and possess basic ingredients of a world model. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 396,743 |
2301.10803 | Evaluating Probabilistic Classifiers: The Triptych | Probability forecasts for binary outcomes, often referred to as probabilistic classifiers or confidence scores, are ubiquitous in science and society, and methods for evaluating and comparing them are in great demand. We propose and study a triptych of diagnostic graphics that focus on distinct and complementary aspects of forecast performance: The reliability diagram addresses calibration, the receiver operating characteristic (ROC) curve diagnoses discrimination ability, and the Murphy diagram visualizes overall predictive performance and value. A Murphy curve shows a forecast's mean elementary scores, including the widely used misclassification rate, and the area under a Murphy curve equals the mean Brier score. For a calibrated forecast, the reliability curve lies on the diagonal, and for competing calibrated forecasts, the ROC and Murphy curves share the same number of crossing points. We invoke the recently developed CORP (Consistent, Optimally binned, Reproducible, and Pool-Adjacent-Violators (PAV) algorithm based) approach to craft reliability diagrams and decompose a mean score into miscalibration (MCB), discrimination (DSC), and uncertainty (UNC) components. Plots of the DSC measure of discrimination ability versus the calibration metric MCB visualize classifier performance across multiple competitors. The proposed tools are illustrated in empirical examples from astrophysics, economics, and social science. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 341,910 |
1204.0156 | Ranking Tweets Considering Trust and Relevance | The increasing popularity of Twitter and other microblogs makes improved trustworthiness and relevance assessment of microblogs ever more important. We propose a method for ranking tweets that considers trustworthiness and content-based popularity. The analysis of trustworthiness and popularity exploits the implicit relationships between the tweets. We model the microblog ecosystem as a three-layer graph consisting of: (i) users, (ii) tweets, and (iii) web pages. We propose to derive trust and popularity scores of entities in these three layers, and propagate the scores to tweets considering the inter-layer relations. Our preliminary evaluations show improvement in precision and trustworthiness over the baseline methods and acceptable computation timings. | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 15,213 |
2003.13197 | Cross-Domain Document Object Detection: Benchmark Suite and Method | Decomposing images of document pages into high-level semantic regions (e.g., figures, tables, paragraphs), document object detection (DOD) is fundamental for downstream tasks like intelligent document editing and understanding. DOD remains a challenging problem as document objects vary significantly in layout, size, aspect ratio, texture, etc. An additional challenge arises in practice because large labeled training datasets are only available for domains that differ from the target domain. We investigate cross-domain DOD, where the goal is to learn a detector for the target domain using labeled data from the source domain and only unlabeled data from the target domain. Documents from the two domains may vary significantly in layout, language, and genre. We establish a benchmark suite consisting of different types of PDF document datasets that can be utilized for cross-domain DOD model training and evaluation. For each dataset, we provide the page images, bounding box annotations, PDF files, and the rendering layers extracted from the PDF files. Moreover, we propose a novel cross-domain DOD model which builds upon the standard detection model and addresses domain shifts by incorporating three novel alignment modules: Feature Pyramid Alignment (FPA) module, Region Alignment (RA) module and Rendering Layer alignment (RLA) module. Extensive experiments on the benchmark suite substantiate the efficacy of the three proposed modules and the proposed method significantly outperforms the baseline methods. The project page is at \url{https://github.com/kailigo/cddod}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 170,134 |
1506.00231 | Channel Equalization and Beamforming for Quaternion-Valued Wireless Communication Systems | Quaternion-valued wireless communication systems have been studied in the past. Although progress has been made in this promising area, a crucial missing link is the lack of effective and efficient quaternion-valued signal processing algorithms for channel equalization and beamforming. With the most recent developments in quaternion-valued signal processing, in this work we fill the gap by studying two quaternion-valued adaptive algorithms: one is the reference-signal-based quaternion-valued least mean square (QLMS) algorithm, and the other is the quaternion-valued constant modulus algorithm (QCMA). The quaternion-valued Wiener solution for possible block-based calculation is also derived. Simulation results are provided to show the working of the system. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 43,638 |
1905.10777 | Cross-Resolution Face Recognition via Prior-Aided Face Hallucination and Residual Knowledge Distillation | Recent deep learning based face recognition methods have achieved great performance, but it still remains challenging to recognize a very low-resolution query face, e.g. 28x28 pixels, when the CCTV camera is far from the captured subject. Such a very low-resolution face lacks the identity detail available at normal resolution in a gallery, making it hard to find the corresponding faces therein. To this end, we propose a Resolution Invariant Model (RIM) for addressing such cross-resolution face recognition problems, with three distinct novelties. First, RIM is a novel and unified deep architecture containing a Face Hallucination sub-Net (FHN) and a Heterogeneous Recognition sub-Net (HRN), which are jointly learned end to end. Second, FHN is a well-designed tri-path Generative Adversarial Network (GAN) which simultaneously perceives facial structure and geometry prior information, i.e. landmark heatmaps and parsing maps, incorporated with an unsupervised cross-domain adversarial training strategy to super-resolve a very low-resolution query image to an 8x larger one without requiring the images to be well aligned. Third, HRN is a generic Convolutional Neural Network (CNN) for heterogeneous face recognition with our proposed residual knowledge distillation strategy for learning discriminative yet generalized feature representations. Quantitative and qualitative experiments on several benchmarks demonstrate the superiority of the proposed model over the state-of-the-arts. Codes and models will be released upon acceptance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 132,192 |
2104.05367 | Visiting the Invisible: Layer-by-Layer Completed Scene Decomposition | Existing scene understanding systems mainly focus on recognizing the visible parts of a scene, ignoring the intact appearance of physical objects in the real world. Concurrently, image completion has aimed to create plausible appearance for the invisible regions, but requires a manual mask as input. In this work, we propose a higher-level scene understanding system to tackle both visible and invisible parts of objects and backgrounds in a given scene. Particularly, we build a system to decompose a scene into individual objects, infer their underlying occlusion relationships, and even automatically learn which occluded parts of the objects need to be completed. In order to disentangle the occlusion relationships of all objects in a complex scene, we use the fact that the front object, being unoccluded, is easy to identify, detect, and segment. Our system interleaves the two tasks of instance segmentation and scene completion through multiple iterations, solving for objects layer-by-layer. We first provide a thorough experiment using a new realistically rendered dataset with ground-truths for all invisible regions. To bridge the domain gap to real imagery where ground-truths are unavailable, we then train another model with the pseudo-ground-truths generated from our trained synthesis model. We demonstrate results on a wide variety of datasets and show significant improvement over the state-of-the-art. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 229,701 |
2007.06507 | Industry Adoption Scenarios for Authoritative Data Stores using the ISDA Common Domain Model | In this paper we explore opportunities for the post-trade industry to standardize and simplify in order to significantly increase efficiency and reduce costs. We start by summarizing relevant industry problems (inconsistent processes, inconsistent data and duplicated data) and then present the corresponding potential industry solutions (process standardization, data standardization and authoritative data stores). This includes transitioning to the International Swaps and Derivatives Association Common Domain Model (CDM) as a standard set of digital representations for the business events and processes throughout the life cycle of a trade. We then explore how financial market infrastructures could operate authoritative data stores that make CDM business events available to broker-dealers, considering both traditional centralized models and potential decentralized models. For both types of model, there are many possible adoption scenarios (depending on each broker-dealer's degree of integration with the authoritative data store and usage of the CDM), and we identify some of the key scenarios. | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | false | 187,031 |
2406.06105 | The Evolution of Applications, Hardware Design, and Channel Modeling for Terahertz (THz) Band Communications and Sensing: Ready for 6G? | For decades, the terahertz (THz) frequency band had been primarily explored in the context of radar, imaging, and spectroscopy, where multi-gigahertz (GHz) and even THz-wide channels and the properties of terahertz photons offered attractive target accuracy, resolution, and classification capabilities. Meanwhile, the exploitation of the terahertz band for wireless communication had originally been limited due to several reasons, including (i) no immediate need for such high data rates available via terahertz bands and (ii) challenges in designing sufficiently high power terahertz systems at reasonable cost and efficiency, leading to what was often referred to as "the terahertz gap". This roadmap paper first reviews the evolution of the hardware design approaches for terahertz systems, including electronic, photonic, and plasmonic approaches, and the understanding of the terahertz channel itself, in diverse scenarios, ranging from common indoors and outdoors scenarios to intra-body and outer-space environments. The article then summarizes the lessons learned during this multi-decade process and the cutting-edge state-of-the-art findings, including novel methods to quantify power efficiency, which will become more important in making design choices. Finally, the manuscript presents the authors' perspective and insights on how the evolution of terahertz systems design will continue toward enabling efficient terahertz communications and sensing solutions as an integral part of next-generation wireless systems. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 462,441 |
2411.11222 | The Sound of Water: Inferring Physical Properties from Pouring Liquids | We study the connection between audio-visual observations and the underlying physics of a mundane yet intriguing everyday activity: pouring liquids. Given only the sound of liquid pouring into a container, our objective is to automatically infer physical properties such as the liquid level, the shape and size of the container, the pouring rate and the time to fill. To this end, we: (i) show in theory that these properties can be determined from the fundamental frequency (pitch); (ii) train a pitch detection model with supervision from simulated data and visual data with a physics-inspired objective; (iii) introduce a new large dataset of real pouring videos for a systematic study; (iv) show that the trained model can indeed infer these physical properties for real data; and finally, (v) we demonstrate strong generalization to various container shapes, other datasets, and in-the-wild YouTube videos. Our work presents a keen understanding of a narrow yet rich problem at the intersection of acoustics, physics, and learning. It opens up applications to enhance multisensory perception in robotic pouring. | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 508,965 |
2303.18132 | A Desynchronization-Based Countermeasure Against Side-Channel Analysis of Neural Networks | Model extraction attacks have been widely applied and can be used to recover confidential parameters of neural networks across multiple layers. Recently, side-channel analysis of neural networks has allowed parameter extraction with high effectiveness even for networks with multiple deep layers. It is therefore of interest to implement a certain level of protection against these attacks. In this paper, we propose a desynchronization-based countermeasure that makes the timing analysis of activation functions harder. We analyze the timing properties of several activation functions and design the desynchronization in a way that hides the dependency on the input and the activation type. We experimentally verify the effectiveness of the countermeasure on a 32-bit ARM Cortex-M4 microcontroller and employ a t-test to show the side-channel information leakage. The overhead ultimately depends on the number of neurons in the fully-connected layer; for example, in the case of 4096 neurons in VGG-19, the overheads are between 2.8% and 11%. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 355,474 |
2104.04546 | One-class Autoencoder Approach for Optimal Electrode Set-up Identification in Wearable EEG Event Monitoring | A limiting factor towards the wide routine use of wearable devices for continuous healthcare monitoring is their cumbersome and obtrusive nature. This is particularly true for electroencephalography (EEG) recordings, which require the placement of multiple electrodes in contact with the scalp. In this work, we propose to identify the optimal wearable EEG electrode set-up, in terms of minimal number of electrodes, comfortable location, and performance, for EEG-based event detection and monitoring. By relying on the demonstrated power of autoencoder (AE) networks to learn latent representations from high-dimensional data, our proposed strategy trains an AE architecture in a one-class classification setup with different electrode set-ups as input data. The resulting models are assessed using the F-score and the best set-up is chosen according to the established optimality criteria. Using alpha wave detection as a use case, we demonstrate that the proposed method allows detecting an alpha state from an optimal set-up consisting of electrodes on the forehead and behind the ear, with an average F-score of 0.78. Our results suggest that a learning-based approach can be used to enable the design and implementation of optimized wearable devices for real-life healthcare monitoring. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 229,417 |
1412.6170 | Manycore processing of repeated k-NN queries over massive moving objects observations | The ability to timely process significant amounts of continuously updated spatial data is mandatory for an increasing number of applications. In this paper we focus on a specific data-intensive problem concerning the repeated processing of huge numbers of k-nearest-neighbour (k-NN) queries over massive sets of moving objects, where the spatial extents of queries and the positions of objects are continuously modified over time. In particular, we propose a novel hybrid CPU/GPU pipeline that significantly accelerates query processing thanks to a combination of ad-hoc data structures and non-trivial memory access patterns. To the best of our knowledge, this is the first work that exploits GPUs to efficiently solve repeated k-NN queries over massive sets of continuously moving objects, even those characterized by highly skewed spatial distributions. In comparison with state-of-the-art sequential CPU-based implementations, our method achieves significant speedups in the order of 10x-20x, depending on the dataset, even when considering cheap GPUs. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 38,599 |
2303.03922 | Structure Pretraining and Prompt Tuning for Knowledge Graph Transfer | Knowledge graphs (KG) are essential background knowledge providers in many tasks. When designing models for KG-related tasks, one of the key tasks is to devise the Knowledge Representation and Fusion (KRF) module that learns the representation of elements from KGs and fuses them with task representations. However, because KGs and the perspectives to be considered during fusion differ across tasks, duplicated and ad hoc KRF module designs are produced for individual tasks. In this paper, we propose a novel knowledge graph pretraining model, KGTransformer, that can serve as a uniform KRF module in diverse KG-related tasks. We pretrain KGTransformer with three self-supervised tasks with sampled sub-graphs as input. For utilization, we propose a general prompt-tuning mechanism that regards task data as a triple prompt to allow flexible interactions between task KGs and task data. We evaluate the pretrained KGTransformer on three tasks: triple classification, zero-shot image classification, and question answering. KGTransformer consistently achieves better results than specifically designed task models. Through experiments, we justify that the pretrained KGTransformer can be used off the shelf as a general and effective KRF module across KG-related tasks. The code and datasets are available at https://github.com/zjukg/KGTransformer. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 349,894 |
1802.04416 | Neural Tensor Factorization | Neural collaborative filtering (NCF) and recurrent recommender systems (RRN) have been successful in modeling user-item relational data. However, they are also limited in their assumption of static or sequential modeling of relational data as they do not account for evolving users' preference over time as well as changes in the underlying factors that drive the change in user-item relationship over time. We address these limitations by proposing a Neural Tensor Factorization (NTF) model for predictive tasks on dynamic relational data. The NTF model generalizes conventional tensor factorization from two perspectives: First, it leverages the long short-term memory architecture to characterize the multi-dimensional temporal interactions on relational data. Second, it incorporates the multi-layer perceptron structure for learning the non-linearities between different latent factors. Our extensive experiments demonstrate the significant improvement in rating prediction and link prediction on dynamic relational data by our NTF model over both neural network based factorization models and other traditional methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 90,214 |
2205.07739 | The Role of Pseudo-labels in Self-training Linear Classifiers on High-dimensional Gaussian Mixture Data | Self-training (ST) is a simple yet effective semi-supervised learning method. However, why and how ST improves generalization performance by using potentially erroneous pseudo-labels is still not well understood. To deepen the understanding of ST, we derive and analyze a sharp characterization of the behavior of iterative ST when training a linear classifier by minimizing the ridge-regularized convex loss on binary Gaussian mixtures, in the asymptotic limit where input dimension and data size diverge proportionally. The results show that ST improves generalization in different ways depending on the number of iterations. When the number of iterations is small, ST improves generalization performance by fitting the model to relatively reliable pseudo-labels and updating the model parameters by a large amount at each iteration. This suggests that ST works intuitively. On the other hand, with many iterations, ST can gradually improve the direction of the classification plane by updating the model parameters incrementally, using soft labels and small regularization. It is argued that this is because the small update of ST can extract information from the data in an almost noiseless way. However, in the presence of label imbalance, the generalization performance of ST underperforms supervised learning with true labels. To overcome this, two heuristics are proposed to enable ST to achieve nearly compatible performance with supervised learning even with significant label imbalance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 296,695
2305.11482 | Enhancing Personalized Dialogue Generation with Contrastive Latent Variables: Combining Sparse and Dense Persona | The personalized dialogue explores the consistent relationship between dialogue generation and personality. Existing personalized dialogue agents model persona profiles from three resources: sparse or dense persona descriptions and dialogue histories. However, sparse structured persona attributes are explicit but uninformative, dense persona texts contain rich persona descriptions with much noise, and dialogue history query is both noisy and uninformative for persona modeling. In this work, we combine the advantages of the three resources to obtain a richer and more accurate persona. We design a Contrastive Latent Variable-based model (CLV) that clusters the dense persona descriptions into sparse categories, which are combined with the history query to generate personalized responses. Experimental results on Chinese and English datasets demonstrate our model's superiority in personalization. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 365,556
2404.06737 | Disguised Copyright Infringement of Latent Diffusion Models | Copyright infringement may occur when a generative model produces samples substantially similar to some copyrighted data that it had access to during the training phase. The notion of access usually refers to including copyrighted samples directly in the training dataset, which one may inspect to identify an infringement. We argue that such visual auditing largely overlooks a concealed copyright infringement, where one constructs a disguise that looks drastically different from the copyrighted sample yet still induces the effect of training Latent Diffusion Models on it. Such disguises only require indirect access to the copyrighted material and cannot be visually distinguished, thus easily circumventing the current auditing tools. In this paper, we provide a better understanding of such disguised copyright infringement by uncovering the disguises generation algorithm, the revelation of the disguises, and importantly, how to detect them to augment the existing toolbox. Additionally, we introduce a broader notion of acknowledgment for comprehending such indirect access. Our code is available at https://github.com/watml/disguised_copyright_infringement. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 445,581 |
2003.10834 | Palm-GAN: Generating Realistic Palmprint Images Using Total-Variation Regularized GAN | Generating realistic palmprint (more generally biometric) images has always been an interesting and, at the same time, challenging problem. Classical statistical models fail to generate realistic-looking palmprint images, as they are not powerful enough to capture the complicated texture representation of palmprint images. In this work, we present a deep learning framework based on generative adversarial networks (GAN), which is able to generate realistic palmprint images. To help the model learn more realistic images, we proposed to add a suitable regularization to the loss function, which imposes the line connectivity of generated palmprint images. This is very desirable for palmprints, as the principal lines in palm are usually connected. We apply this framework to a popular palmprint database, and generate images which look very realistic, and similar to the samples in this database. Through experimental results, we show that the generated palmprint images look very realistic, have a good diversity, and are able to capture different parts of the prior distribution. We also report the Frechet Inception distance (FID) of the proposed model, and show that our model is able to achieve really good quantitative performance in terms of FID score. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 169,449
1704.06178 | Exploring epoch-dependent stochastic residual networks | The recently proposed stochastic residual networks selectively activate or bypass the layers during training, based on independent stochastic choices, each of which following a probability distribution that is fixed in advance. In this paper we present a first exploration on the use of an epoch-dependent distribution, starting with a higher probability of bypassing deeper layers and then activating them more frequently as training progresses. Preliminary results are mixed, yet they show some potential of adding an epoch-dependent management of distributions, worth of further investigation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 72,131 |
1803.01090 | On Modular Training of Neural Acoustics-to-Word Model for LVCSR | End-to-end (E2E) automatic speech recognition (ASR) systems directly map acoustics to words using a unified model. Previous works mostly focus on E2E training a single model which integrates acoustic and language model into a whole. Although E2E training benefits from sequence modeling and simplified decoding pipelines, large amount of transcribed acoustic data is usually required, and traditional acoustic and language modelling techniques cannot be utilized. In this paper, a novel modular training framework of E2E ASR is proposed to separately train neural acoustic and language models during training stage, while still performing end-to-end inference in decoding stage. Here, an acoustics-to-phoneme model (A2P) and a phoneme-to-word model (P2W) are trained using acoustic data and text data respectively. A phone synchronous decoding (PSD) module is inserted between A2P and P2W to reduce sequence lengths without precision loss. Finally, modules are integrated into an acousticsto-word model (A2W) and jointly optimized using acoustic data to retain the advantage of sequence modeling. Experiments on a 300- hour Switchboard task show significant improvement over the direct A2W model. The efficiency in both training and decoding also benefits from the proposed method. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 91,810 |
2406.16062 | Towards Biologically Plausible Computing: A Comprehensive Comparison | Backpropagation is a cornerstone algorithm in training neural networks for supervised learning, which uses a gradient descent method to update network weights by minimizing the discrepancy between actual and desired outputs. Despite its pivotal role in propelling deep learning advancements, the biological plausibility of backpropagation is questioned due to its requirements for weight symmetry, global error computation, and dual-phase training. To address this long-standing challenge, many studies have endeavored to devise biologically plausible training algorithms. However, a fully biologically plausible algorithm for training multilayer neural networks remains elusive, and interpretations of biological plausibility vary among researchers. In this study, we establish criteria for biological plausibility that a desirable learning algorithm should meet. Using these criteria, we evaluate a range of existing algorithms considered to be biologically plausible, including Hebbian learning, spike-timing-dependent plasticity, feedback alignment, target propagation, predictive coding, forward-forward algorithm, perturbation learning, local losses, and energy-based learning. Additionally, we empirically evaluate these algorithms across diverse network architectures and datasets. We compare the feature representations learned by these algorithms with brain activity recorded by non-invasive devices under identical stimuli, aiming to identify which algorithm can most accurately replicate brain activity patterns. We are hopeful that this study could inspire the development of new biologically plausible algorithms for training multilayer networks, thereby fostering progress in both the fields of neuroscience and machine learning. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 466,981 |
2312.02372 | On the Trade-Off between Stability and Representational Capacity in Graph Neural Networks | Analyzing the stability of graph neural networks (GNNs) under topological perturbations is key to understanding their transferability and the role of each architecture component. However, stability has been investigated only for particular architectures, questioning whether it holds for a broader spectrum of GNNs or only for a few instances. To answer this question, we study the stability of EdgeNet: a general GNN framework that unifies more than twenty solutions including the convolutional and attention-based classes, as well as graph isomorphism networks and hybrid architectures. We prove that all GNNs within the EdgeNet framework are stable to topological perturbations. By studying the effect of different EdgeNet categories on the stability, we show that GNNs with fewer degrees of freedom in their parameter space, linked to a lower representational capacity, are more stable. The key factor yielding this trade-off is the eigenvector misalignment between the EdgeNet parameter matrices and the graph shift operator. For example, graph convolutional neural networks that assign a single scalar per signal shift (hence, with a perfect alignment) are more stable than the more involved node or edge-varying counterparts. Extensive numerical results corroborate our theoretical findings and highlight the role of different architecture components in the trade-off. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 412,839
2210.05185 | Meta-Learning with Self-Improving Momentum Target | The idea of using a separately trained target model (or teacher) to improve the performance of the student model has been increasingly popular in various machine learning domains, and meta-learning is no exception; a recent discovery shows that utilizing task-wise target models can significantly boost the generalization performance. However, obtaining a target model for each task can be highly expensive, especially when the number of tasks for meta-learning is large. To tackle this issue, we propose a simple yet effective method, coined Self-improving Momentum Target (SiMT). SiMT generates the target model by adapting from the temporal ensemble of the meta-learner, i.e., the momentum network. This momentum network and its task-specific adaptations enjoy a favorable generalization performance, enabling self-improving of the meta-learner through knowledge distillation. Moreover, we found that perturbing parameters of the meta-learner, e.g., dropout, further stabilize this self-improving process by preventing fast convergence of the distillation loss during meta-training. Our experimental results demonstrate that SiMT brings a significant performance gain when combined with a wide range of meta-learning methods under various applications, including few-shot regression, few-shot classification, and meta-reinforcement learning. Code is available at https://github.com/jihoontack/SiMT. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 322,742 |
2301.01673 | Text sampling strategies for predicting missing bibliographic links | The paper proposes various strategies for sampling text data when performing automatic sentence classification for the purpose of detecting missing bibliographic links. We construct samples based on sentences as semantic units of the text and add their immediate context which consists of several neighboring sentences. We examine a number of sampling strategies that differ in context size and position. The experiment is carried out on the collection of STEM scientific papers. Including the context of sentences into samples improves the result of their classification. We automatically determine the optimal sampling strategy for a given text collection by implementing an ensemble voting when classifying the same data sampled in different ways. Sampling strategy taking into account the sentence context with hard voting procedure leads to the classification accuracy of 98% (F1-score). This method of detecting missing bibliographic links can be used in recommendation engines of applied intelligent information systems. | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 339,292 |
2204.06265 | Optimal Intermittent Particle Filter | The problem of the optimal allocation (in the expected mean square error sense) of a measurement budget for particle filtering is addressed. We propose three different optimal intermittent filters, whose optimality criteria depend on the information available at the time of decision making. For the first, the stochastic program filter, the measurement times are given by a policy that determines whether a measurement should be taken based on the measurements already acquired. The second, called the offline filter, determines all measurement times at once by solving a combinatorial optimization program before any measurement acquisition. For the third one, which we call online filter, each time a new measurement is received, the next measurement time is recomputed to take all the information that is then available into account. We prove that in terms of expected mean square error, the stochastic program filter outperforms the online filter, which itself outperforms the offline filter. However, these filters are generally intractable. For this reason, the filter estimate is approximated by a particle filter. Moreover, the mean square error is approximated using a Monte-Carlo approach, and different optimization algorithms are compared to approximately solve the combinatorial programs (a random trial algorithm, greedy forward and backward algorithms, a simulated annealing algorithm, and a genetic algorithm). Finally, the performance of the proposed methods is illustrated on two examples: a tumor motion model and a common benchmark for particle filtering. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 291,288 |
1802.07226 | Implicit Argument Prediction with Event Knowledge | Implicit arguments are not syntactically connected to their predicates, and are therefore hard to extract. Previous work has used models with large numbers of features, evaluated on very small datasets. We propose to train models for implicit argument prediction on a simple cloze task, for which data can be generated automatically at scale. This allows us to use a neural model, which draws on narrative coherence and entity salience for predictions. We show that our model has superior performance on both synthetic and natural data. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 90,853 |
cs/0701118 | Optimal Order of Decoding for Max-Min Fairness in $K$-User Memoryless Interference Channels | A $K$-user memoryless interference channel is considered where each receiver sequentially decodes the data of a subset of transmitters before it decodes the data of the designated transmitter. Therefore, the data rate of each transmitter depends on (i) the subset of receivers which decode the data of that transmitter, (ii) the decoding order, employed at each of these receivers. In this paper, a greedy algorithm is developed to find the users which are decoded at each receiver and the corresponding decoding order such that the minimum rate of the users is maximized. It is proven that the proposed algorithm is optimal. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 540,076
2101.04001 | Automatic Polyp Segmentation using Fully Convolutional Neural Network | Colorectal cancer is one of the most fatal cancers worldwide. Colonoscopy is the standard treatment for examination, localization, and removal of colorectal polyps. However, it has been shown that the miss-rate of colorectal polyps during colonoscopy is between 6 to 27%. The use of an automated, accurate, and real-time polyp segmentation during colonoscopy examinations can help the clinicians to eliminate missing lesions and prevent further progression of colorectal cancer. The ``Medico automatic polyp segmentation challenge'' provides an opportunity to study polyp segmentation and build a fast segmentation model. The challenge organizers provide a Kvasir-SEG dataset to train the model. Then it is tested on a separate unseen dataset to validate the efficiency and speed of the segmentation model. The experiments demonstrate that the model trained on the Kvasir-SEG dataset and tested on an unseen dataset achieves a dice coefficient of 0.7801, mIoU of 0.6847, recall of 0.8077, and precision of 0.8126, demonstrating the generalization ability of our model. The model has achieved 80.60 FPS on the unseen dataset with an image resolution of $512 \times 512$. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 215,045
2502.10693 | Extremely Large Full Duplex MIMO for Simultaneous Downlink Communications and Monostatic Sensing at Sub-THz Frequencies | The in-band Full Duplex (FD) technology is lately gaining attention as an enabler for the emerging paradigm of Integrated Sensing and Communications (ISAC), which envisions seamless integration of sensing mechanisms for unconnected entities into next generation wireless networks. In this paper, we present an FD Multiple-Input Multiple-Output (MIMO) system with extremely large antenna arrays at its transceiver module, which is optimized, considering two emerging analog beamforming architectures, for simultaneous DownLink (DL) communications and monostatic-type sensing intended at the sub-THz frequencies, with the latter operation relying on received reflections of the transmitted information-bearing signals. A novel optimization framework for the joint design of the analog and digital transmit beamforming, analog receive combining, and the digital canceler for the self-interference signal is devised with the objective to maximize the achievable DL rate, while meeting a predefined threshold for the position error bound for the unknown three-dimensional parameters of a passive target. Capitalizing on the distinctive features of the beamforming architectures with fully-connected networks of phase shifters and partially-connected arrays of metamaterials, two ISAC designs are presented. Our simulation results showcase the superiority of both proposed designs over state-of-the-art schemes, highlighting the role of various system parameters in the trade-off between the communication and sensing functionalities. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 534,002
2306.00917 | Vocabulary-free Image Classification | Recent advances in large vision-language models have revolutionized the image classification paradigm. Despite showing impressive zero-shot capabilities, a pre-defined set of categories, a.k.a. the vocabulary, is assumed at test time for composing the textual prompts. However, such assumption can be impractical when the semantic context is unknown and evolving. We thus formalize a novel task, termed as Vocabulary-free Image Classification (VIC), where we aim to assign to an input image a class that resides in an unconstrained language-induced semantic space, without the prerequisite of a known vocabulary. VIC is a challenging task as the semantic space is extremely large, containing millions of concepts, with hard-to-discriminate fine-grained categories. In this work, we first empirically verify that representing this semantic space by means of an external vision-language database is the most effective way to obtain semantically relevant content for classifying the image. We then propose Category Search from External Databases (CaSED), a method that exploits a pre-trained vision-language model and an external vision-language database to address VIC in a training-free manner. CaSED first extracts a set of candidate categories from captions retrieved from the database based on their semantic similarity to the image, and then assigns to the image the best matching candidate category according to the same vision-language model. Experiments on benchmark datasets validate that CaSED outperforms other complex vision-language frameworks, while being efficient with much fewer parameters, paving the way for future research in this direction. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 370,208 |
2006.05698 | Rendering Natural Camera Bokeh Effect with Deep Learning | Bokeh is an important artistic effect used to highlight the main object of interest on the photo by blurring all out-of-focus areas. While DSLR and system camera lenses can render this effect naturally, mobile cameras are unable to produce shallow depth-of-field photos due to a very small aperture diameter of their optics. Unlike the current solutions simulating bokeh by applying Gaussian blur to image background, in this paper we propose to learn a realistic shallow focus technique directly from the photos produced by DSLR cameras. For this, we present a large-scale bokeh dataset consisting of 5K shallow / wide depth-of-field image pairs captured using the Canon 7D DSLR with 50mm f/1.8 lenses. We use these images to train a deep learning model to reproduce a natural bokeh effect based on a single narrow-aperture image. The experimental results show that the proposed approach is able to render a plausible non-uniform bokeh even in case of complex input data with multiple objects. The dataset, pre-trained models and codes used in this paper are available on the project website. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 181,172 |
1308.4904 | Proceedings Third International Workshop on Hybrid Autonomous Systems | The interest on autonomous systems is increasing both in industry and academia. Such systems must operate with limited human intervention in a changing environment and must be able to compensate for significant system failures without external intervention. The most appropriate models of autonomous systems can be found in the class of hybrid systems (which study continuous-state dynamic processes via discrete-state controllers) that interact with their environment. This workshop brings together researchers interested in all aspects of autonomy and resilience of hybrid systems. | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 26,576 |
1810.02118 | Infill Criterion for Multimodal Model-Based Optimisation | Physical systems are modelled and investigated within simulation software in an increasing range of applications. In reality an investigation of the system is often performed by empirical test scenarios which are related to typical situations. Our aim is to derive a method which generates diverse test scenarios each representing a challenging situation for the corresponding physical system. From a mathematical point of view challenging test scenarios correspond to local optima. Hence, we focus to identify all local optima within mathematical functions. Due to the fact that simulation runs are usually expensive we use the model-based optimisation approach with its well-known representative efficient global optimisation. We derive an infill criterion which focuses on the identification of local optima. The criterion is checked via fifteen different artificial functions in a computer experiment. Our new infill criterion performs better in identifying local optima compared to the expected improvement infill criterion and Latin Hypercube Samples. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 109,537 |
1605.05395 | Learning Deep Representations of Fine-grained Visual Descriptions | State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch; i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech UCSD Birds 200-2011 dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 55,998 |
2411.15149 | The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template | What is the context which gave rise to the obligation to carry out a Fundamental Rights Impact Assessment (FRIA) in the AI Act? How has assessment of the impact on fundamental rights been framed by the EU legislator in the AI Act? What methodological criteria should be followed in developing the FRIA? These are the three main research questions that this article aims to address, through both legal analysis of the relevant provisions of the AI Act and discussion of various possible models for assessment of the impact of AI on fundamental rights. The overall objective of this article is to fill existing gaps in the theoretical and methodological elaboration of the FRIA, as outlined in the AI Act. In order to facilitate the future work of EU and national bodies and AI operators in placing this key tool for human-centric and trustworthy AI at the heart of the EU approach to AI design and development, this article outlines the main building blocks of a model template for the FRIA. While this proposal is consistent with the rationale and scope of the AI Act, it is also applicable beyond the cases listed in Article 27 and can serve as a blueprint for other national and international regulatory initiatives to ensure that AI is fully consistent with human rights. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 510,462
2302.02946 | Development of an Immersive Virtual Colonoscopy Viewer for Colon Growths Diagnosis | Desktop-based virtual colonoscopy has been proven to be an asset in the identification of colon anomalies. The process is accurate, although time-consuming. The use of immersive interfaces for virtual colonoscopy is incipient and not yet understood. In this work, we present a new design exploring elements of the VR paradigm to make the immersive analysis more efficient while still effective. We also plan the conduction of experiments with experts to assess the multi-factor influences of coverage, duration, and diagnostic accuracy. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 344,165
0909.2376 | Performing Hybrid Recommendation in Intermodal Transportation-the FTMarket System's Recommendation Module | Diverse recommendation techniques have been already proposed and encapsulated into several e-business applications, aiming to perform a more accurate evaluation of the existing information and accordingly augment the assistance provided to the users involved. This paper reports on the development and integration of a recommendation module in an agent-based transportation transactions management system. The module is built according to a novel hybrid recommendation technique, which combines the advantages of collaborative filtering and knowledge-based approaches. The proposed technique and supporting module assist customers in considering in detail alternative transportation transactions that satisfy their requests, as well as in evaluating completed transactions. The related services are invoked through a software agent that constructs the appropriate knowledge rules and performs a synthesis of the recommendation policy. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 4,486
2303.05391 | Disambiguation of Company names via Deep Recurrent Networks | Name Entity Disambiguation is the Natural Language Processing task of identifying textual records corresponding to the same Named Entity, i.e. real-world entities represented as a list of attributes (names, places, organisations, etc.). In this work, we face the task of disambiguating companies on the basis of their written names. We propose a Siamese LSTM Network approach to extract -- via supervised learning -- an embedding of company name strings in a (relatively) low dimensional vector space and use this representation to identify pairs of company names that actually represent the same company (i.e. the same Entity). Given that the manual labelling of string pairs is a rather onerous task, we analyse how an Active Learning approach to prioritise the samples to be labelled leads to a more efficient overall learning pipeline. With empirical investigations, we show that our proposed Siamese Network outperforms several benchmark approaches based on standard string matching algorithms when enough labelled data are available. Moreover, we show that Active Learning prioritisation is indeed helpful when labelling resources are limited, and let the learning models reach the out-of-sample performance saturation with less labelled data with respect to standard (random) data labelling approaches. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | true | false | 350,448 |
2406.01870 | Understanding Stochastic Natural Gradient Variational Inference | Stochastic natural gradient variational inference (NGVI) is a popular posterior inference method with applications in various probabilistic models. Despite its wide usage, little is known about the non-asymptotic convergence rate in the \emph{stochastic} setting. We aim to lessen this gap and provide a better understanding. For conjugate likelihoods, we prove the first $\mathcal{O}(\frac{1}{T})$ non-asymptotic convergence rate of stochastic NGVI. The complexity is no worse than stochastic gradient descent (\aka black-box variational inference) and the rate likely has better constant dependency that leads to faster convergence in practice. For non-conjugate likelihoods, we show that stochastic NGVI with the canonical parameterization implicitly optimizes a non-convex objective. Thus, a global convergence rate of $\mathcal{O}(\frac{1}{T})$ is unlikely without some significant new understanding of optimizing the ELBO using natural gradients. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 460,498 |
1104.3250 | Adding noise to the input of a model trained with a regularized objective | Regularization is a well studied problem in the context of neural networks. It is usually used to improve the generalization performance when the number of input samples is relatively small or heavily contaminated with noise. The regularization of a parametric model can be achieved in different manners, some of which are early stopping (Morgan and Bourlard, 1990), weight decay, and output smoothing, which are used to avoid overfitting during the training of the considered model. From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters (Krogh and Hertz, 1991). Using Bishop's approximation (Bishop, 1995) of the objective function when a restricted type of noise is added to the input of a parametric function, we derive the higher order terms of the Taylor expansion and analyze the coefficients of the regularization terms induced by the noisy input. In particular we study the effect of penalizing the Hessian of the mapping function with respect to the input in terms of generalization performance. We also show how we can control independently this coefficient by explicitly penalizing the Jacobian of the mapping function on corrupted inputs. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 10,019
2201.07612 | ReGNL: Rapid Prediction of GDP during Disruptive Events using Nightlights | Policy makers often make decisions based on parameters such as GDP, unemployment rate, industrial output, etc. The primary methods to obtain or even estimate such information are resource intensive and time consuming. In order to make timely and well-informed decisions, it is imperative to be able to come up with proxies for these parameters which can be sampled quickly and efficiently, especially during disruptive events, like the COVID-19 pandemic. Recently, there has been a lot of focus on using remote sensing data for this purpose. The data has become cheaper to collect compared to surveys, and can be available in real time. In this work, we present Regional GDP NightLight (ReGNL), a neural network based model which is trained on a custom dataset of historical nightlights and GDP data along with the geographical coordinates of a place, and estimates the GDP of the place, given the other parameters. Taking the case of 50 US states, we find that ReGNL is disruption-agnostic and is able to predict the GDP for both normal years (2019) and for years with a disruptive event (2020). ReGNL outperforms timeseries ARIMA methods for prediction, even during the pandemic. Following from our findings, we make a case for building infrastructures to collect and make available granular data, especially in resource-poor geographies, so that these can be leveraged for policy making during disruptive events. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 276,080
2108.00065 | Model Preserving Compression for Neural Networks | After training complex deep learning models, a common task is to compress the model to reduce compute and storage demands. When compressing, it is desirable to preserve the original model's per-example decisions (e.g., to go beyond top-1 accuracy or preserve robustness), maintain the network's structure, automatically determine per-layer compression levels, and eliminate the need for fine tuning. No existing compression methods simultaneously satisfy these criteria $\unicode{x2014}$ we introduce a principled approach that does by leveraging interpolative decompositions. Our approach simultaneously selects and eliminates channels (analogously, neurons), then constructs an interpolation matrix that propagates a correction into the next layer, preserving the network's structure. Consequently, our method achieves good performance even without fine tuning and admits theoretical analysis. Our theoretical generalization bound for a one layer network lends itself naturally to a heuristic that allows our method to automatically choose per-layer sizes for deep networks. We demonstrate the efficacy of our approach with strong empirical performance on a variety of tasks, models, and datasets $\unicode{x2014}$ from simple one-hidden-layer networks to deep networks on ImageNet. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 248,585 |
2106.15374 | Polynomial-Time Algorithms for Structurally Observable Graphs by Controlling Minimal Vertices | The aim of this paper is to characterize an important class of marked digraphs, called structurally observable graphs (SOGs), and to solve two minimum realization problems. To begin with, by exploring structural observability of large-scale Boolean networks (LSBNs), an underlying type of SOGs is provided based on a recent observability criterion of conjunctive BNs. Besides, SOGs are also proved to have important applicability to structural observability of general discrete-time systems. Further, two minimum realization strategies are considered to induce an SOG from an arbitrarily given digraph by marking and controlling the minimal vertices, respectively. It indicates that one can induce an observable system by means of adding the minimal sensors or modifying the adjacency relation of minimal vertices. Finally, the structural observability of finite-field networks, and the minimum pinned node theorem for Boolean networks are displayed as application and simulation. The most salient superiority is that the designed algorithms are polynomial time and avoid exhaustive brute-force searches. It means that our results can be applied to deal with the observability of large-scale systems (particularly, LSBNs), whose observability analysis and the minimum controlled node theorem are known as intractable problems. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 243,747
2202.12407 | Gaussian Belief Trees for Chance Constrained Asymptotically Optimal Motion Planning | In this paper, we address the problem of sampling-based motion planning under motion and measurement uncertainty with probabilistic guarantees. We generalize traditional sampling-based tree-based motion planning algorithms for deterministic systems and propose belief-$\mathcal{A}$, a framework that extends any kinodynamical tree-based planner to the belief space for linear (or linearizable) systems. We introduce appropriate sampling techniques and distance metrics for the belief space that preserve the probabilistic completeness and asymptotic optimality properties of the underlying planner. We demonstrate the efficacy of our approach for finding safe low-cost paths efficiently and asymptotically optimally in simulation, for both holonomic and non-holonomic systems. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 282,224
1802.01448 | Hardening Deep Neural Networks via Adversarial Model Cascades | Deep neural networks (DNNs) are vulnerable to malicious inputs crafted by an adversary to produce erroneous outputs. Works on securing neural networks against adversarial examples achieve high empirical robustness on simple datasets such as MNIST. However, these techniques are inadequate when empirically tested on complex data sets such as CIFAR-10 and SVHN. Further, existing techniques are designed to target specific attacks and fail to generalize across attacks. We propose the Adversarial Model Cascades (AMC) as a way to tackle the above inadequacies. Our approach trains a cascade of models sequentially where each model is optimized to be robust towards a mixture of multiple attacks. Ultimately, it yields a single model which is secure against a wide range of attacks; namely FGSM, Elastic, Virtual Adversarial Perturbations and Madry. On an average, AMC increases the model's empirical robustness against various attacks simultaneously, by a significant margin (of 6.225% for MNIST, 5.075% for SVHN and 2.65% for CIFAR10). At the same time, the model's performance on non-adversarial inputs is comparable to the state-of-the-art models. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 89,606 |
1108.3153 | Differential games of partial information forward-backward doubly stochastic differential equations and applications | This paper is concerned with a new type of differential game problems of forward-backward stochastic systems. There are three distinguishing features: Firstly, our game systems are forward-backward doubly stochastic differential equations, which is a class of more general game systems than other forward-backward stochastic game systems without doubly stochastic terms; Secondly, forward equations are directly related to backward equations at initial time, not terminal time; Thirdly, the admissible control is required to be adapted to a sub-information of the full information generated by the underlying Brownian motions. We give necessary and sufficient conditions for both an equilibrium point of nonzero-sum games and a saddle point of zero-sum games. Finally, we work out an example of linear-quadratic nonzero-sum differential games to illustrate the theoretical applications. Applying some stochastic filtering techniques, we obtain the explicit expression of the equilibrium point. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 11,675
2112.14297 | A Framework for the Joint Optimization of Assignment and Pricing in Mobility-on-Demand Systems with Shared Rides | Mobility-on-Demand (MoD) systems have become a fixture in urban transportation networks, with the rapid growth of ride-hailing services such as Uber and Lyft. Ride-hailing is typically complemented with ridepooling options, which can reduce the negative externalities associated with ride-hailing services and increase the utilization of vehicles. Determining optimal policies for vehicle dispatching and pricing, two key components that enable MoD services, is challenging due to their massive scale and online nature. The challenge is amplified when the MoD platform offers exclusive (conventional ride-hailing) and shared services, and customers have the option to select between them. The pricing and dispatching problems are coupled because the realized demand depends on the quality of service (i.e., whom to share rides with) and the prices for each service type. We propose an integrated and computationally efficient method for solving the joint pricing and dispatching problem -- both when the problem is solved one request at a time or in batches (a common strategy in the industry). The main results of this research include showing that: (i) the sequential pricing problem has a closed-form solution under a multinomial logit (MNL) choice model, and (ii) the batched pricing problem is jointly concave in the expected demand distributions. To account for the spatial evolution of supply and demand, we introduce so-called retrospective costs to retain a tractable framework. Our numerical experiments demonstrate how this framework yields significant profit increases using taxicab data in Manhattan, New York City, compared to dynamic dispatching with static pricing policies. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 273,482
2306.02707 | Orca: Progressive Learning from Complex Explanation Traces of GPT-4 | Recent research has focused on enhancing the capability of smaller models through imitation learning, drawing on the outputs generated by large foundation models (LFMs). A number of issues impact the quality of these models, ranging from limited imitation signals from shallow LFM outputs; small scale homogeneous training data; and most notably a lack of rigorous evaluation resulting in overestimating the small model's capability as they tend to learn to imitate the style, but not the reasoning process of LFMs. To address these challenges, we develop Orca (We are working with our legal team to publicly release a diff of the model weights in accordance with LLaMA's release policy to be published at https://aka.ms/orca-lm), a 13-billion parameter model that learns to imitate the reasoning process of LFMs. Orca learns from rich signals from GPT-4 including explanation traces; step-by-step thought processes; and other complex instructions, guided by teacher assistance from ChatGPT. To promote this progressive learning, we tap into large-scale and diverse imitation data with judicious sampling and selection. Orca surpasses conventional state-of-the-art instruction-tuned models such as Vicuna-13B by more than 100% in complex zero-shot reasoning benchmarks like Big-Bench Hard (BBH) and 42% on AGIEval. Moreover, Orca reaches parity with ChatGPT on the BBH benchmark and shows competitive performance (4 pts gap with optimized system message) in professional and academic examinations like the SAT, LSAT, GRE, and GMAT, both in zero-shot settings without CoT; while trailing behind GPT-4. Our research indicates that learning from step-by-step explanations, whether these are generated by humans or more advanced AI models, is a promising direction to improve model capabilities and skills. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 371,021
2103.11630 | Efficient Processing of k-regret Minimization Queries with Theoretical Guarantees | Assisting end users to identify desired results from a large dataset is an important problem for multi-criteria decision making. To address this problem, top-k and skyline queries have been widely adopted, but they both have inherent drawbacks, i.e., the user either has to provide a specific utility function or faces many results. The k-regret minimization query is proposed, which integrates the merits of top-k and skyline queries. Due to the NP-hardness of the problem, the k-regret minimization query is time consuming and the greedy framework is widely adopted. However, formal theoretical analysis of the greedy approaches for the quality of the returned results is still lacking. In this paper, we first fill this gap by conducting a nontrivial theoretical analysis of the approximation ratio of the returned results. To speed up query processing, a sampling-based method, StocPreGreed, is developed to reduce the evaluation cost. In addition, a theoretical analysis of the required sample size is conducted to bound the quality of the returned results. Finally, comprehensive experiments are conducted on both real and synthetic datasets to demonstrate the efficiency and effectiveness of the proposed methods. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 225,880
2005.05516 | Framing Effects on Strategic Information Design under Receiver Distrust and Unknown State | Strategic information design is a framework where a sender designs information strategically to steer its receiver's decision towards a desired choice. Traditionally, such frameworks have always assumed that the sender and the receiver comprehend the state of the choice environment, and that the receiver always trusts the sender's signal. This paper deviates from these assumptions and re-investigates strategic information design in the presence of a distrustful receiver and when both sender and receiver cannot observe/comprehend the environment state space. Specifically, we assume that both sender and receiver have access to non-identical beliefs about choice rewards (with the sender's belief being more accurate), but not the environment state that determines these rewards. Furthermore, given that the receiver does not trust the sender, we also assume that the receiver updates its prior in a non-Bayesian manner. We evaluate the Stackelberg equilibrium and investigate effects of information framing (i.e. send the complete signal, or just the expected value of the signal) on the equilibrium. Furthermore, we also investigate trust dynamics at the receiver, under the assumption that the receiver minimizes regret in hindsight. Simulation results are presented to illustrate signaling effects and trust dynamics in strategic information design. | false | false | false | false | false | true | false | false | false | false | true | false | false | false | true | false | false | true | 176,752
2006.02399 | ExKMC: Expanding Explainable $k$-Means Clustering | Despite the popularity of explainable AI, there is limited work on effective methods for unsupervised learning. We study algorithms for $k$-means clustering, focusing on a trade-off between explainability and accuracy. Following prior work, we use a small decision tree to partition a dataset into $k$ clusters. This enables us to explain each cluster assignment by a short sequence of single-feature thresholds. While larger trees produce more accurate clusterings, they also require more complex explanations. To allow flexibility, we develop a new explainable $k$-means clustering algorithm, ExKMC, that takes an additional parameter $k' \geq k$ and outputs a decision tree with $k'$ leaves. We use a new surrogate cost to efficiently expand the tree and to label the leaves with one of $k$ clusters. We prove that as $k'$ increases, the surrogate cost is non-increasing, and hence, we trade explainability for accuracy. Empirically, we validate that ExKMC produces a low cost clustering, outperforming both standard decision tree methods and other algorithms for explainable clustering. Implementation of ExKMC available at https://github.com/navefr/ExKMC. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 180,029 |
2205.12096 | A generalised, multi-phase-field theory for dissolution-driven stress corrosion cracking and hydrogen embrittlement | We present a phase field-based electro-chemo-mechanical formulation for modelling mechanics-enhanced corrosion and hydrogen-assisted cracking in elastic-plastic solids. A multi-phase-field approach is used to present, for the first time, a general framework for stress corrosion cracking, incorporating both anodic dissolution and hydrogen embrittlement mechanisms. We numerically implement our theory using the finite element method and defining as primary kinematic variables the displacement components, the phase field corrosion order parameter, the metal ion concentration, the phase field fracture order parameter and the hydrogen concentration. Representative case studies are addressed to showcase the predictive capabilities of the model in various materials and environments, attaining a promising agreement with benchmark tests and experimental observations. We show that the generalised formulation presented can capture, as a function of the environment, the interplay between anodic dissolution- and hydrogen-driven failure mechanisms; including the transition from one to the other, their synergistic action and their individual occurrence. Such a generalised framework can bring new insight into environment-material interactions and the understanding of stress corrosion cracking, as demonstrated here by providing the first simulation results for Gruhl's seminal experiments. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 298,402
2208.05914 | 3, 2, 1, Drones Go! A Testbed to Take off UAV Swarm Intelligence for Distributed Sensing | This paper introduces a testbed to study distributed sensing problems of Unmanned Aerial Vehicles (UAVs) exhibiting swarm intelligence. Several Smart City applications, such as transport and disaster response, require efficient collection of sensor data by a swarm of intelligent and cooperative UAVs. This often proves to be too complex and costly to study systematically and rigorously without compromising scale, realism and external validity. With the proposed testbed, this paper sets a stepping stone to emulate, within small laboratory spaces, large sensing areas of interest originated from empirical data and simulation models. Over this sensing map, a swarm of low-cost drones can fly allowing the study of a large spectrum of problems such as energy consumption, charging control, navigation and collision avoidance. The applicability of a decentralized multi-agent collective learning algorithm (EPOS) for UAV swarm intelligence along with the assessment of power consumption measurements provide a proof-of-concept and validate the accuracy of the proposed testbed. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 312,544
1803.08999 | LayoutNet: Reconstructing the 3D Room Layout from a Single RGB Image | We propose an algorithm to predict room layout from a single image that generalizes across panoramas and perspective images, cuboid layouts and more general layouts (e.g. L-shape room). Our method operates directly on the panoramic image, rather than decomposing into perspective images as do recent works. Our network architecture is similar to that of RoomNet, but we show improvements due to aligning the image based on vanishing points, predicting multiple layout elements (corners, boundaries, size and translation), and fitting a constrained Manhattan layout to the resulting predictions. Our method compares well in speed and accuracy to other existing work on panoramas, achieves among the best accuracy for perspective images, and can handle both cuboid-shaped and more general Manhattan layouts. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 93,391 |
2502.12303 | From Gaming to Research: GTA V for Synthetic Data Generation for Robotics and Navigations | In computer vision, the development of robust algorithms capable of generalizing effectively in real-world scenarios more and more often requires large-scale datasets collected under diverse environmental conditions. However, acquiring such datasets is time-consuming, costly, and sometimes unfeasible. To address these limitations, the use of synthetic data has gained attention as a viable alternative, allowing researchers to generate vast amounts of data while simulating various environmental contexts in a controlled setting. In this study, we investigate the use of synthetic data in robotics and navigation, specifically focusing on Simultaneous Localization and Mapping (SLAM) and Visual Place Recognition (VPR). In particular, we introduce a synthetic dataset created using the virtual environment of the video game Grand Theft Auto V (GTA V), along with an algorithm designed to generate a VPR dataset, without human supervision. Through a series of experiments centered on SLAM and VPR, we demonstrate that synthetic data derived from GTA V are qualitatively comparable to real-world data. Furthermore, these synthetic data can complement or even substitute real-world data in these applications. This study sets the stage for the creation of large-scale synthetic datasets, offering a cost-effective and scalable solution for future research and development. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 534,793
1802.04172 | Coded Distributed Computing with Node Cooperation Substantially Increases Speedup Factors | This work explores a distributed computing setting where $K$ nodes are assigned fractions (subtasks) of a computational task in order to perform the computation in parallel. In this setting, a well-known main bottleneck has been the inter-node communication cost required to parallelize the task, because unlike the computational cost which could keep decreasing as $K$ increases, the communication cost remains approximately constant, thus bounding the total speedup gains associated to having more computing nodes. This bottleneck was substantially ameliorated by the recent introduction of coded MapReduce techniques which allowed each node --- at the computational cost of having to preprocess approximately $t$ times more subtasks --- to reduce its communication cost by approximately $t$ times. In reality though, the associated speed up gains were severely limited by the requirement that larger $t$ and $K$ necessitated that the original task be divided into an extremely large number of subtasks. In this work we show how node cooperation, along with a novel assignment of tasks, can help to dramatically ameliorate this limitation. The result applies to wired as well as wireless distributed computing, and it is based on the idea of having groups of nodes compute identical parallelization (mapping) tasks and then employing a here-proposed novel D2D coded caching algorithm. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 90,159
1604.03212 | Recommendations for web service composition by mining usage logs | Web service composition has been one of the most researched topics of the past decade. Novel methods of web service composition proposed in the literature include Semantics-based composition and WSDL-based composition. Although these methods provide promising results for composition, search and discovery of web services based on QoS parameters of the network and the semantics or ontology associated with WSDL, they do not address composition based on usage of web services. Web service usage logs capture time series data of web service invocation by business objects, which innately captures patterns or workflows associated with business operations. Web service composition based on such patterns and workflows can greatly streamline business operations. In this research work, we try to explore and implement methods of mining web service usage logs. Main objectives include identifying usage associations of services, linking one service invocation with another, and evaluating the causal relationship between associations of services. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 54,457
2208.00885 | Many-to-One Knowledge Distillation of Real-Time Epileptic Seizure Detection for Low-Power Wearable Internet of Things Systems | Integrating low-power wearable Internet of Things (IoT) systems into routine health monitoring is an ongoing challenge. Recent advances in the computation capabilities of wearables make it possible to target complex scenarios by exploiting multiple biosignals and using high-performance algorithms, such as Deep Neural Networks (DNNs). There is, however, a trade-off between performance of the algorithms and the low-power requirements of IoT platforms with limited resources. Besides, physically larger and multi-biosignal-based wearables bring significant discomfort to the patients. Consequently, reducing power consumption and discomfort is necessary for patients to use IoT devices continuously during everyday life. To overcome these challenges, in the context of epileptic seizure detection, we propose a many-to-one signals knowledge distillation approach targeting single-biosignal processing in IoT wearable systems. The starting point is to get a highly-accurate multi-biosignal DNN, then apply our approach to develop a single-biosignal DNN solution for IoT systems that achieves an accuracy comparable to the original multi-biosignal DNN. To assess the practicality of our approach to real-life scenarios, we perform a comprehensive simulation experiment analysis on several state-of-the-art edge computing platforms, such as Kendryte K210 and Raspberry Pi Zero. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 311,009
2306.01881 | Hardware-in-the-Loop and Road Testing of RLVW and GLOSA Connected Vehicle Applications | This paper presents an evaluation of two different Vehicle to Infrastructure (V2I) applications, namely Red Light Violation Warning (RLVW) and Green Light Optimized Speed Advisory (GLOSA). The evaluation method is to first develop and use Hardware-in-the-Loop (HIL) simulator testing, followed by extension of the HIL testing to road testing using an experimental connected vehicle. The HIL simulator used in the testing is a state-of-the-art simulator that consists of the same hardware like the road side unit and traffic cabinet as is used in real intersections and allows testing of numerous different traffic and intersection geometry and timing scenarios realistically. First, the RLVW V2I algorithm is tested in the HIL simulator and then implemented in an On-Board-Unit (OBU) in our experimental vehicle and tested at real world intersections. This same approach of HIL testing followed by testing in real intersections using our experimental vehicle is later extended to the GLOSA application. The GLOSA application that is tested in this paper has both an optimal speed advisory for passing at the green light and also includes a red light violation warning system. The paper presents the HIL and experimental vehicle evaluation systems, information about RLVW and GLOSA and HIL simulation and road testing results and their interpretations. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 370,652
1904.08109 | Contextual Aware Joint Probability Model Towards Question Answering
System | In this paper, we address the question answering challenge with the SQuAD 2.0 dataset. We design a model architecture which leverages BERT's capability of context-aware word embeddings and BiDAF's context interactive exploration mechanism. By integrating these two state-of-the-art architectures, our system tries to extract the contextual word representation at word and character levels, for better comprehension of both question and context and their correlations. We also propose our original joint posterior probability predictor module and its associated loss functions. Our best model so far obtains an F1 score of 75.842% and an EM score of 72.24% on the test PCE leaderboard. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 127,976
2005.01349 | A Directed Spanning Tree Adaptive Control Framework for Time-Varying
Formations | In this paper, the time-varying formation and time-varying formation tracking problems are solved for linear multi-agent systems over digraphs without the knowledge of the eigenvalues of the Laplacian matrix associated to the digraph. The solution to these problems relies on a framework that generalizes the directed spanning tree adaptive method, which was originally limited to consensus problems. Necessary and sufficient conditions for the existence of solutions to the formation problems are derived. Asymptotic convergence of the formation errors is proved via graph theory and Lyapunov analysis. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 175,556 |
1803.07739 | Assessing Shape Bias Property of Convolutional Neural Networks | It is known that humans display "shape bias" when classifying new items, i.e., they prefer to categorize objects based on their shape rather than color. Convolutional Neural Networks (CNNs) are also designed to take into account the spatial structure of image data. In fact, experiments on image datasets, consisting of triples of a probe image, a shape-match and a color-match, have shown that one-shot learning models display shape bias as well. In this paper, we examine the shape bias property of CNNs. In order to conduct large scale experiments, we propose using the model accuracy on images with reversed brightness as a metric to evaluate the shape bias property. Such images, called negative images, contain objects that have the same shape as original images, but with different colors. Through extensive systematic experiments, we investigate the role of different factors, such as training data, model architecture, initialization and regularization techniques, on the shape bias property of CNNs. We show that it is possible to design different CNNs that achieve similar accuracy on original images, but perform significantly differently on negative images, suggesting that CNNs do not intrinsically display shape bias. We then show that CNNs are able to learn and generalize the structures when the model is properly initialized or data is properly augmented, and if batch normalization is used. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 93,120
2012.00357 | Efficient Data Structures for Model-free Data-Driven Computational
Mechanics | The data-driven computing paradigm initially introduced by Kirchdoerfer and Ortiz (2016) enables finite element computations in solid mechanics to be performed directly from material data sets, without an explicit material model. From a computational effort point of view, the most challenging task is the projection of admissible states at material points onto their closest states in the material data set. In this study, we compare and develop several possible data structures for solving the nearest-neighbor problem. We show that approximate nearest-neighbor (ANN) algorithms can accelerate material data searches by several orders of magnitude relative to exact searching algorithms. The approximations are suggested by--and adapted to--the structure of the data-driven iterative solver and result in no significant loss of solution accuracy. We assess the performance of the ANN algorithm with respect to material data set size with the aid of a 3D elasticity test case. We show that computations on a single processor with up to one billion material data points are feasible within a few seconds execution time with a speedup of more than $10^6$ with respect to exact k-d trees. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 209,112
2208.09577 | Real-time Short Video Recommendation on Mobile Devices | Short video applications have attracted billions of users in recent years, fulfilling their various needs with diverse content. Users usually watch short videos on many topics on mobile devices in a short period of time, and give explicit or implicit feedback very quickly to the short videos they watch. The recommender system needs to perceive users' preferences in real-time in order to satisfy their changing interests. Traditionally, recommender systems deployed at the server side return a ranked list of videos for each request from the client. Thus they cannot adjust the recommendation results according to the user's real-time feedback before the next request. Due to client-server transmission latency, they are also unable to make immediate use of users' real-time feedback. However, as users continue to watch videos and give feedback, the changing context renders the ranking of the server-side recommendation system inaccurate. In this paper, we propose to deploy a short video recommendation framework on mobile devices to solve these problems. Specifically, we design and deploy a tiny on-device ranking model to enable real-time re-ranking of server-side recommendation results. We improve its prediction accuracy by exploiting users' real-time feedback of watched videos and client-specific real-time features. With more accurate predictions, we further consider interactions among candidate videos, and propose a context-aware re-ranking method based on adaptive beam search. The framework has been deployed on Kuaishou, a billion-user scale short video application, and improved effective view, like and follow by 1.28%, 8.22% and 13.6% respectively. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 313,743
2207.10282 | An Evolutionary Game based Secure Clustering Protocol with Fuzzy Trust
Evaluation and Outlier Detection for Wireless Sensor Networks | Trustworthy and reliable data delivery is a challenging task in Wireless Sensor Networks (WSNs) due to their unique characteristics and constraints. To achieve secure data delivery and address the conflict between security and energy, in this paper we present an evolutionary game based secure clustering protocol with fuzzy trust evaluation and outlier detection for WSNs. First, a fuzzy trust evaluation method is presented to transform transmission evidences into trust values while effectively alleviating trust uncertainty. Then, a K-Means based outlier detection scheme is proposed to further analyze the large number of trust values obtained via fuzzy trust evaluation or trust recommendation. It can discover the commonalities and differences among sensor nodes while improving the accuracy of outlier detection. Finally, we present an evolutionary game based secure clustering protocol to achieve a trade-off between security assurance and energy saving for sensor nodes when electing cluster heads. A sensor node that fails to become a cluster head can securely choose its own head by isolating suspicious nodes. Simulation results verify that our secure clustering protocol can effectively defend the network against attacks from internal selfish or compromised nodes. Correspondingly, the timely data transfer rate can be improved significantly. | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | true | 309,199
2010.00463 | A Finite Memory Interacting P\'{o}lya Contagion Network and its
Approximating Dynamical Systems | We introduce a new model for contagion spread using a network of interacting finite memory two-color P\'{o}lya urns, which we refer to as the finite memory interacting P\'{o}lya contagion network. The urns interact in the sense that the probability of drawing a red ball (which represents an infection state) for a given urn, not only depends on the ratio of red balls in that urn but also on the ratio of red balls in the other urns in the network, hence accounting for the effect of spatial contagion. The resulting network-wide contagion process is a discrete-time finite-memory ($M$th order) Markov process, whose transition probability matrix is determined. The stochastic properties of the network contagion Markov process are analytically examined, and for homogeneous system parameters, we characterize the limiting state of infection in each urn. For the non-homogeneous case, given the complexity of the stochastic process, and in the same spirit as the well-studied SIS models, we use a mean-field type approximation to obtain a discrete-time dynamical system for the finite memory interacting P\'{o}lya contagion network. Interestingly, for $M=1$, we obtain a linear dynamical system which exactly represents the corresponding Markov process. For $M>1$, we use mean-field approximation to obtain a nonlinear dynamical system. Furthermore, noting that the latter dynamical system admits a linear variant (realized by retaining its leading linear terms), we study the asymptotic behavior of the linear systems for both memory modes and characterize their equilibrium. Finally, we present simulation studies to assess the quality of the approximation purveyed by the linear and non-linear dynamical systems. | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 198,296 |
2106.02862 | Antenna Array Diagnosis for Millimeter-Wave MIMO Systems | The densely packed antennas of millimeter-wave (mmWave) MIMO systems are often blocked by rain, snow, dust and even by fingers, which changes the channel's characteristics and degrades the system's performance. In order to solve this problem, we propose a cross-entropy inspired antenna array diagnosis detection (CE-AAD) technique that exploits the correlations of adjacent antennas when blockages occur at the transmitter. We then extend the proposed CE-AAD algorithm to the case where blockages occur at the transmitter and receiver simultaneously. Our simulation results show that the proposed CE-AAD algorithm outperforms its traditional counterparts. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 239,066
2008.11185 | Bias-Awareness for Zero-Shot Learning the Seen and Unseen | Generalized zero-shot learning recognizes inputs from both seen and unseen classes. Yet, existing methods tend to be biased towards the classes seen during training. In this paper, we strive to mitigate this bias. We propose a bias-aware learner to map inputs to a semantic embedding space for generalized zero-shot learning. During training, the model learns to regress to real-valued class prototypes in the embedding space with temperature scaling, while a margin-based bidirectional entropy term regularizes seen and unseen probabilities. Relying on a real-valued semantic embedding space provides a versatile approach, as the model can operate on different types of semantic information for both seen and unseen classes. Experiments are carried out on four benchmarks for generalized zero-shot learning and demonstrate the benefits of the proposed bias-aware classifier, both as a stand-alone method or in combination with generated features. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 193,198 |
2409.03282 | Interpretable mixture of experts for time series prediction under
recurrent and non-recurrent conditions | Non-recurrent conditions caused by incidents are different from recurrent conditions that follow periodic patterns. Existing traffic speed prediction studies are incident-agnostic and use one single model to learn all possible patterns from these drastically diverse conditions. This study proposes a novel Mixture of Experts (MoE) model to improve traffic speed prediction under two separate conditions, recurrent and non-recurrent (i.e., with and without incidents). The MoE leverages separate recurrent and non-recurrent expert models (Temporal Fusion Transformers) to capture the distinct patterns of each traffic condition. Additionally, we propose a training pipeline for non-recurrent models to remedy the limited data issues. To train our model, multi-source datasets, including traffic speed, incident reports, and weather data, are integrated and processed to be informative features. Evaluations on a real road network demonstrate that the MoE achieves lower errors compared to other benchmark algorithms. The model predictions are interpreted in terms of temporal dependencies and variable importance in each condition separately to shed light on the differences between recurrent and non-recurrent conditions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 485,998 |
1410.4017 | Online Tracking of Skin Colour Regions Against a Complex Background | Online tracking of human activity against a complex background is a challenging task for many applications. In this paper, we have developed a robust technique for localizing skin colour regions from unconstrained image frames. A simple and fast segmentation algorithm is used to train a multilayer perceptron (MLP) for detection of skin colours. Stepper motors are synchronized with the MLP to track the movement of the skin colour regions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 36,764
2101.11870 | Strategic Argumentation Dialogues for Persuasion: Framework and
Experiments Based on Modelling the Beliefs and Concerns of the Persuadee | Persuasion is an important and yet complex aspect of human intelligence. When undertaken through dialogue, the deployment of good arguments, and therefore counterarguments, clearly has a significant effect on the ability to be successful in persuasion. Two key dimensions for determining whether an argument is good in a particular dialogue are the degree to which the intended audience believes the argument and counterarguments, and the impact that the argument has on the concerns of the intended audience. In this paper, we present a framework for modelling persuadees in terms of their beliefs and concerns, and for harnessing these models in optimizing the choice of move in persuasion dialogues. Our approach is based on the Monte Carlo Tree Search which allows optimization in real-time. We provide empirical results of a study with human participants showing that our automated persuasion system based on this technology is superior to a baseline system that does not take the beliefs and concerns into account in its strategy. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 217,412 |
2006.10641 | Shannon meets Myerson: Information Extraction from a Strategic Sender | We study a setting where a receiver must design a questionnaire to recover a sequence of symbols known to a strategic sender, whose utility may not be incentive compatible. We allow the receiver the possibility of selecting the alternatives presented in the questionnaire, and thereby linking decisions across the components of the sequence. We show that, despite the strategic sender and the noise in the channel, the receiver can recover exponentially many sequences, but also that exponentially many sequences are unrecoverable even by the best strategy. We define the growth rate of the number of recovered sequences as the information extraction capacity. A generalization of the Shannon capacity, it characterizes the optimal amount of communication resources required. We derive bounds leading to an exact evaluation of the information extraction capacity in many cases. Our results form the building blocks of a novel, noncooperative regime of communication involving a strategic sender. | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | false | true | 182,957
2311.12562 | Multi-Resolution Planar Region Extraction for Uneven Terrains | This paper studies the problem of extracting planar regions in uneven terrains from unordered point cloud measurements. Such a problem is critical in various robotic applications such as robotic perceptive locomotion. While existing approaches have shown promising results in effectively extracting planar regions from the environment, they often suffer from issues such as low computational efficiency or loss of resolution. To address these issues, we propose a multi-resolution planar region extraction strategy in this paper that balances the accuracy in boundaries and computational efficiency. Our method begins with a pointwise classification preprocessing module, which categorizes all sampled points according to their local geometric properties to facilitate multi-resolution segmentation. Subsequently, we arrange the categorized points using an octree, followed by an in-depth analysis of nodes to finish multi-resolution plane segmentation. The efficiency and robustness of the proposed approach are verified via synthetic and real-world experiments, demonstrating our method's ability to generalize effectively across various uneven terrains while maintaining real-time performance, achieving frame rates exceeding 35 FPS. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 409,379 |
2211.03345 | Over-the-Air Integrated Sensing, Communication, and Computation in IoT
Networks | To facilitate the development of Internet of Things (IoT) services, tremendous numbers of IoT devices are deployed in the wireless network to collect and pass data to the server for further processing. Aiming at improving the data sensing and delivery efficiency, the integrated sensing and communication (ISAC) technique has been proposed to design dual-functional signals for both radar sensing and data communication. To accelerate the data processing, function computation via signal transmission is enabled by over-the-air computation (AirComp), which is based on the analog-wave addition property in a multi-access channel. As a natural combination, the emerging technology of over-the-air integrated sensing, communication, and computation (Air-ISCC) combines the promising performance of ISAC and AirComp to improve spectrum efficiency and reduce latency by enabling simultaneous sensing, communication, and computation. In this article, we provide a prompt overview of Air-ISCC by introducing the fundamentals, discussing the advanced techniques, and identifying the applications. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 328,905
1810.03594 | Proximal Online Gradient is Optimum for Dynamic Regret | In online learning, the dynamic regret metric chooses the reference (optimal) solution that may change over time, while the typical (static) regret metric assumes the reference solution to be constant over the whole time horizon. The dynamic regret metric is particularly interesting for applications such as online recommendation (since customers' preferences always evolve over time). While the online gradient method has been shown to be optimal for the static regret metric, the optimal algorithm for the dynamic regret remains unknown. In this paper, we show that proximal online gradient (a general version of online gradient) is optimal for the dynamic regret, by showing that the proved lower bound matches an upper bound that slightly improves upon the existing upper bound. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 109,844
2410.20981 | EEG-Driven 3D Object Reconstruction with Style Consistency and Diffusion
Prior | Electroencephalography (EEG)-based visual perception reconstruction has become an important area of research. Neuroscientific studies indicate that humans can decode imagined 3D objects by perceiving or imagining various visual information, such as color, shape, and rotation. Existing EEG-based visual decoding methods typically focus only on the reconstruction of 2D visual stimulus images and face various challenges in generation quality, including inconsistencies in texture, shape, and color between the visual stimuli and the reconstructed images. This paper proposes an EEG-based 3D object reconstruction method with style consistency and diffusion priors. The method consists of an EEG-driven multi-task joint learning stage and an EEG-to-3D diffusion stage. The first stage uses a neural EEG encoder based on regional semantic learning, employing a multi-task joint learning scheme that includes a masked EEG signal recovery task and an EEG-based visual classification task. The second stage introduces a latent diffusion model (LDM) fine-tuning strategy with style-conditioned constraints and a neural radiance field (NeRF) optimization strategy. This strategy explicitly embeds semantic- and location-aware latent EEG codes and combines them with visual stimulus maps to fine-tune the LDM. The fine-tuned LDM serves as a diffusion prior, which, combined with the style loss of visual stimuli, is used to optimize NeRF for generating 3D objects. Finally, through experimental validation, we demonstrate that this method can effectively use EEG data to reconstruct 3D objects with style consistency. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 503,040
2303.09503 | The Intel Neuromorphic DNS Challenge | A critical enabler for progress in neuromorphic computing research is the ability to transparently evaluate different neuromorphic solutions on important tasks and to compare them to state-of-the-art conventional solutions. The Intel Neuromorphic Deep Noise Suppression Challenge (Intel N-DNS Challenge), inspired by the Microsoft DNS Challenge, tackles a ubiquitous and commercially relevant task: real-time audio denoising. Audio denoising is likely to reap the benefits of neuromorphic computing due to its low-bandwidth, temporal nature and its relevance for low-power devices. The Intel N-DNS Challenge consists of two tracks: a simulation-based algorithmic track to encourage algorithmic innovation, and a neuromorphic hardware (Loihi 2) track to rigorously evaluate solutions. For both tracks, we specify an evaluation methodology based on energy, latency, and resource consumption in addition to output audio quality. We make the Intel N-DNS Challenge dataset scripts and evaluation code freely accessible, encourage community participation with monetary prizes, and release a neuromorphic baseline solution which shows promising audio quality, high power efficiency, and low resource consumption when compared to Microsoft NsNet2 and a proprietary Intel denoising model used in production. We hope the Intel N-DNS Challenge will hasten innovation in neuromorphic algorithms research, especially in the area of training tools and methods for real-time signal processing. We expect the winners of the challenge will demonstrate that for problems like audio denoising, significant gains in power and resources can be realized on neuromorphic devices available today compared to conventional state-of-the-art solutions. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 352,072 |
1804.11297 | Sampling strategies in Siamese Networks for unsupervised speech
representation learning | Recent studies have investigated siamese network architectures for learning invariant speech representations using same-different side information at the word level. Here we investigate systematically an often ignored component of siamese networks: the sampling procedure (how pairs of same vs. different tokens are selected). We show that sampling strategies taking into account Zipf's Law, the distribution of speakers and the proportions of same and different pairs of words significantly impact the performance of the network. In particular, we show that word frequency compression improves learning across a large range of variations in number of training pairs. This effect does not apply to the same extent to the fully unsupervised setting, where the pairs of same-different words are obtained by spoken term discovery. We apply these results to pairs of words discovered using an unsupervised algorithm and show an improvement on state-of-the-art in unsupervised representation learning using siamese networks. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 96,342 |
2104.06806 | Continual learning in cross-modal retrieval | Multimodal representations and continual learning are two areas closely related to human intelligence. The former considers the learning of shared representation spaces where information from different modalities can be compared and integrated (we focus on cross-modal retrieval between language and visual representations). The latter studies how to prevent forgetting a previously learned task when learning a new one. While humans excel in these two aspects, deep neural networks are still quite limited. In this paper, we propose a combination of both problems into a continual cross-modal retrieval setting, where we study how the catastrophic interference caused by new tasks impacts the embedding spaces and their cross-modal alignment required for effective retrieval. We propose a general framework that decouples the training, indexing and querying stages. We also identify and study different factors that may lead to forgetting, and propose tools to alleviate it. We found that the indexing stage plays an important role and that simply avoiding reindexing the database with updated embedding networks can lead to significant gains. We evaluated our methods in two image-text retrieval datasets, obtaining significant gains with respect to the fine-tuning baseline. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 230,205
2401.08139 | Transferring Core Knowledge via Learngenes | The pre-training paradigm fine-tunes models trained on large-scale datasets to downstream tasks with enhanced performance. It transfers all knowledge to downstream tasks without discriminating which part is necessary or unnecessary, which may lead to negative transfer. In comparison, knowledge transfer in nature is much more efficient. When passing genetic information to descendants, ancestors encode only the essential knowledge into genes, which act as the medium. Inspired by that, we adopt a recent concept called ``learngene'' and refine its structures by mimicking the structures of natural genes. We propose Genetic Transfer Learning (GTL) -- a framework that copies the evolutionary process of organisms into neural networks. GTL trains a population of networks, selects superior learngenes by tournaments, performs learngene mutations, and passes the learngenes to the next generations. Finally, we successfully extract the learngenes of VGG11 and ResNet12. We show that the learngenes bring the descendant networks instincts and strong learning ability: with 20% of the parameters, the learngenes bring 12% and 16% improvements in accuracy on CIFAR-FS and miniImageNet. Besides, the learngenes are scalable and adaptable to the downstream structures of networks and datasets. Overall, we offer a novel insight that transferring core knowledge via learngenes may be sufficient and efficient for neural networks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 421,780
2402.19414 | Higher-Order Networks Representation and Learning: A Survey | Network data has become widespread, larger, and more complex over the years. Traditional network data is dyadic, capturing the relations among pairs of entities. With the need to model interactions among more than two entities, significant research has focused on higher-order networks and ways to represent, analyze, and learn from them. There are two main directions to studying higher-order networks. One direction has focused on capturing higher-order patterns in traditional (dyadic) graphs by changing the basic unit of study from nodes to small frequently observed subgraphs, called motifs. As most existing network data comes in the form of pairwise dyadic relationships, studying higher-order structures within such graphs may uncover new insights. The second direction aims to directly model higher-order interactions using new and more complex representations such as simplicial complexes or hypergraphs. Some of these models have long been proposed, but improvements in computational power and the advent of new computational techniques have increased their popularity. Our goal in this paper is to provide a succinct yet comprehensive summary of the advanced higher-order network analysis techniques. We provide a systematic review of its foundations and algorithms, along with use cases and applications of higher-order networks in various scientific domains. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 433,792 |
1901.07509 | Single-Server Multi-Message Individually-Private Information Retrieval
with Side Information | We consider a multi-user variant of the private information retrieval problem described as follows. Suppose there are $D$ users, each of which wants to privately retrieve a distinct message from a server with the help of a trusted agent. We assume that the agent has a random subset of $M$ messages that is not known to the server. The goal of the agent is to collectively retrieve the users' requests from the server. For protecting the privacy of users, we introduce the notion of individual-privacy -- the agent is required to protect the privacy only for each individual user (but may leak some correlations among user requests). We refer to this problem as Individually-Private Information Retrieval with Side Information (IPIR-SI). We first establish a lower bound on the capacity, which is defined as the maximum achievable download rate, of the IPIR-SI problem by presenting a novel achievability protocol. Next, we characterize the capacity of the IPIR-SI problem for $M = 1$ and $D = 2$. In the process of characterizing the capacity for arbitrary $M$ and $D$, we present a novel combinatorial conjecture that may be of independent interest. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 119,229
2203.00918 | Smart Tracking Tray System for A Smart and Sustainable Wet Lab Community | The laboratories and research institutes are the major places for cutting-edge scientific exploration. Hundreds of millions of research papers were formed from front-line labs. Behind this glorious achievement were unsustainable facts. More and more human investment is required in innovative experimental design and analysis of results. However, the laboratory operating environment has not been subversively transformed for centuries. This abstract proposed a smart tracking system, consisting of IoT and Data Visualization technologies, to track the chemicals in an automatic and timely approach. Positive feedback has been collected from pilot tests in several labs. The system benefits various lab users in their daily work and improves their working efficiency. In the long run, it will play an essential role in promoting the efficient use of lab resources and achieving the goal of sustainable labs. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 283,181 |
2104.04989 | NorDial: A Preliminary Corpus of Written Norwegian Dialect Use | Norway has a large amount of dialectal variation, as well as a general tolerance to its use in the public sphere. There are, however, few available resources to study this variation and its change over time and in more informal areas, \eg on social media. In this paper, we propose a first step to creating a corpus of dialectal variation of written Norwegian. We collect a small corpus of tweets and manually annotate them as Bokm{\aa}l, Nynorsk, any dialect, or a mix. We further perform preliminary experiments with state-of-the-art models, as well as an analysis of the data to expand this corpus in the future. Finally, we make the annotations and models available for future work. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 229,561 |
2312.06378 | Density-based isogeometric topology optimization of shell structures | Shell structures with a high stiffness-to-weight ratio are desirable in various engineering applications. In such scenarios, topology optimization serves as a popular and effective tool for shell structures design. Among the topology optimization methods, solid isotropic material with penalization method(SIMP) is often chosen due to its simplicity and convenience. However, SIMP method is typically integrated with conventional finite element analysis(FEA) which has limitations in computational accuracy. Achieving high accuracy with FEA needs a substantial number of elements, leading to computational burdens. In addition, the discrete representation of the material distribution may result in rough boundaries and checkerboard structures. To overcome these challenges, this paper proposes an isogeometric analysis(IGA) based SIMP method for optimizing the topology of shell structures based on Reissner-Mindlin theory. We use NURBS to represent both the shell structure and the material distribution function with the same basis functions, allowing for higher accuracy and smoother boundaries. The optimization model takes compliance as the objective function with a volume fraction constraint and the coefficients of the density function as design variables. The Method of Moving Asymptotes is employed to solve the optimization problem, resulting in an optimized shell structure defined by the material distribution function. To obtain fairing boundaries in the optimized shell structure, further process is conducted by fitting the boundaries with fair B-spline curves automatically. Furthermore, the IGA-SIMP framework is applied to generate porous shell structures by imposing different local volume fraction constraints. Numerical examples are provided to demonstrate the feasibility and efficiency of the IGA-SIMP method, showing that it outperforms the FEA-SIMP method and produces smoother boundaries. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 414,496
1704.00421 | Review on Requirements Modeling and Analysis for Self-Adaptive Systems: A Ten-Year Perspective | Context: Over the last decade, software researchers and engineers have developed a vast body of methodologies and technologies in requirements engineering for self-adaptive systems. Although existing studies have explored various aspects of this field, no systematic study has been performed on summarizing modeling methods and corresponding requirements activities. Objective: This study summarizes the state-of-the-art research trends, details the modeling methods and corresponding requirements activities, identifies relevant quality attributes and application domains and assesses the quality of each study. Method: We perform a systematic literature review underpinned by a rigorously established and reviewed protocol. To ensure the quality of the study, we choose 21 highly regarded publication venues and 8 popular digital libraries. In addition, we apply text mining to derive search strings and use Kappa coefficient to mitigate disagreements of researchers. Results: We selected 109 papers during the period of 2003-2013 and presented the research distributions over various kinds of factors. We extracted 29 modeling methods which are classified into 8 categories and identified 14 requirements activities which are classified into 4 requirements timelines. We captured 8 concerned software quality attributes based on the ISO 9126 standard and 12 application domains. Conclusion: The frequency of application of modeling methods varies greatly. Enterprise models were more widely used while behavior models were more rigorously evaluated. Requirements-driven runtime adaptation was the most frequently studied requirements activity. Activities at runtime were conveyed with more details. Finally, we draw other conclusions by discussing how well modeling dimensions were considered in these modeling methods and how well assurance dimensions were conveyed in requirements activities. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 71,082
1803.00179 | Matching Natural Language Sentences with Hierarchical Sentence Factorization | Semantic matching of natural language sentences or identifying the relationship between two sentences is a core research problem underlying many natural language tasks. Depending on whether training data is available, prior research has proposed both unsupervised distance-based schemes and supervised deep learning schemes for sentence matching. However, previous approaches either omit or fail to fully utilize the ordered, hierarchical, and flexible structures of language objects, as well as the interactions between them. In this paper, we propose Hierarchical Sentence Factorization---a technique to factorize a sentence into a hierarchical representation, with the components at each different scale reordered into a "predicate-argument" form. The proposed sentence factorization technique leads to the invention of: 1) a new unsupervised distance metric which calculates the semantic distance between a pair of text snippets by solving a penalized optimal transport problem while preserving the logical relationship of words in the reordered sentences, and 2) new multi-scale deep learning models for supervised semantic training, based on factorized sentence hierarchies. We apply our techniques to text-pair similarity estimation and text-pair relationship classification tasks, based on multiple datasets such as STSbenchmark, the Microsoft Research paraphrase identification (MSRP) dataset, the SICK dataset, etc. Extensive experiments show that the proposed hierarchical sentence factorization can be used to significantly improve the performance of existing unsupervised distance-based metrics as well as multiple supervised deep learning models based on the convolutional neural network (CNN) and long short-term memory (LSTM). | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 91,612
2203.11368 | Audio visual character profiles for detecting background characters in entertainment media | An essential goal of computational media intelligence is to support understanding how media stories -- be it news, commercial or entertainment media -- represent and reflect society and these portrayals are perceived. People are a central element of media stories. This paper focuses on understanding the representation and depiction of background characters in media depictions, primarily movies and TV shows. We define the background characters as those who do not participate vocally in any scene throughout the movie and address the problem of localizing background characters in videos. We use an active speaker localization system to extract high-confidence face-speech associations and generate audio-visual profiles for talking characters in a movie by automatically clustering them. Using a face verification system, we then prune all the face-tracks which match any of the generated character profiles and obtain the background character face-tracks. We curate a background character dataset which provides annotations for background character for a set of TV shows, and use it to evaluate the performance of the background character detection framework. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 286,881
2408.06716 | Towards Cross-Domain Single Blood Cell Image Classification via Large-Scale LoRA-based Segment Anything Model | Accurate classification of blood cells plays a vital role in hematological analysis as it aids physicians in diagnosing various medical conditions. In this study, we present a novel approach for classifying blood cell images known as BC-SAM. BC-SAM leverages the large-scale foundation model of Segment Anything Model (SAM) and incorporates a fine-tuning technique using LoRA, allowing it to extract general image embeddings from blood cell images. To enhance the applicability of BC-SAM across different blood cell image datasets, we introduce an unsupervised cross-domain autoencoder that focuses on learning intrinsic features while suppressing artifacts in the images. To assess the performance of BC-SAM, we employ four widely used machine learning classifiers (Random Forest, Support Vector Machine, Artificial Neural Network, and XGBoost) to construct blood cell classification models and compare them against existing state-of-the-art methods. Experimental results conducted on two publicly available blood cell datasets (Matek-19 and Acevedo-20) demonstrate that our proposed BC-SAM achieves a new state-of-the-art result, surpassing the baseline methods with a significant improvement. The source code of this paper is available at https://github.com/AnoK3111/BC-SAM. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 480,313
2303.05360 | Proceedings 11th International Workshop on Theorem Proving Components for Educational Software | The ThEdu series pursues the smooth transition from an intuitive way of doing mathematics at secondary school to a more formal approach to the subject in STEM education, while favouring software support for this transition by exploiting the power of theorem-proving technologies. What follows is a brief description of how the present volume contributes to this enterprise. The 11th International Workshop on Theorem Proving Components for Educational Software (ThEdu'22), was a satellite event of the 8th Federated Logic Conference (FLoC 2022), July 31-August 12, 2022, Haifa, Israel ThEdu'22 was a vibrant workshop, with two invited talk by Thierry Dana-Picard (Jerusalem College of Technology, Jerusalem, Israel) and Yoni Zohar (Bar Ilan University, Tel Aviv, Israel) and four contributions. An open call for papers was then issued, and attracted seven submissions. Those submissions have been accepted by our reviewers, who jointly produced at least three careful reports on each of the contributions. The resulting revised papers are collected in the present volume. The contributions in this volume are a faithful representation of the wide spectrum of ThEdu, ranging from those more focused on the automated deduction research, not losing track of the possible applications in an educational setting, to those focused on the applications, in educational settings, of automated deduction tools and methods. We, the volume editors, hope that this collection of papers will further promote the development of theorem-proving based software, and that it will allow to improve the mutual understanding between computer scientists, mathematicians and stakeholders in education. While this volume goes to press, the next edition of the ThEdu workshop is being prepared: ThEdu'23 will be a satellite event of the 29th international Conference on Automated Deduction (CADE 2023), July 1-4, 2023, Rome, Italy. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | true | 350,435
2403.15268 | Awakening Augmented Generation: Learning to Awaken Internal Knowledge of Large Language Models for Question Answering | Retrieval-Augmented-Generation and Generation-Augmented-Generation have been proposed to enhance the knowledge required for question answering with Large Language Models (LLMs) by leveraging richer context. However, the former relies on external resources, and both require incorporating explicit documents into the context, which increases execution costs and susceptibility to noise data during inference. Recent works indicate that LLMs model rich knowledge, but it is often not effectively activated and awakened. Inspired by this, we propose a novel knowledge-augmented framework, $\textbf{Awakening-Augmented-Generation}$ (AAG), which mimics the human ability to answer questions using only thinking and recalling to compensate for knowledge gaps, thereby awaking relevant knowledge in LLMs without relying on external resources. AAG consists of two key components for awakening richer context. Explicit awakening fine-tunes a context generator to create a synthetic, compressed document that functions as symbolic context. Implicit awakening utilizes a hypernetwork to generate adapters based on the question and synthetic document, which are inserted into LLMs to serve as parameter context. Experimental results on three datasets demonstrate that AAG exhibits significant advantages in both open-domain and closed-book settings, as well as in out-of-distribution generalization. Our code will be available at \url{https://github.com/Xnhyacinth/IAG}. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 440,475
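Each record above follows the column layout id | title | abstract | 18 boolean label flags (cs.HC through Other) | __index_level_0__. The sketch below shows one way such a flattened row could be parsed; `parse_row` is an illustrative helper (not part of any dataset tooling), it assumes the " | " separator never occurs inside the id or title fields (true for the rows shown here), and the example row's abstract is shortened for brevity.

```python
def parse_row(line: str):
    """Split one flattened dump row into (id, title, abstract, labels, index).

    Illustrative helper: anchors on the trailing 18 flag columns and the
    index, then rejoins anything between title and flags as the abstract,
    so a ' | ' inside the abstract text is preserved.
    """
    parts = line.split(" | ")
    idx = int(parts[-1].replace(",", ""))          # e.g. "119,229" -> 119229
    labels = [p == "true" for p in parts[-19:-1]]  # the 18 category flags
    arxiv_id, title = parts[0], parts[1]
    abstract = " | ".join(parts[2:-19])            # everything between title and flags
    return arxiv_id, title, abstract, labels, idx

# Example row (abstract shortened for illustration).
row = ("1901.07509 | Single-Server Multi-Message Individually-Private "
       "Information Retrieval with Side Information | We consider a "
       "multi-user variant of the private information retrieval problem. | "
       "false | false | false | false | false | false | false | false | "
       "false | true | false | false | false | false | false | false | "
       "false | false | 119,229")
arxiv_id, title, abstract, labels, idx = parse_row(row)
```

With the column order given in the dataset header, the tenth flag (`labels[9]`, cs.IT) is the single true label for this example row.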