Dataset schema:
- id: string, 9 to 16 characters
- title: string, 4 to 278 characters
- abstract: string, 3 to 4.08k characters
- 18 boolean label columns (2 classes each), in this order: cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
- __index_level_0__: int64, 0 to 541k
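Given this schema, each row can be decoded into a multilabel record by reading the 18 boolean columns in the order listed above. A minimal sketch (the row dict and the helper name `decode_labels` are illustrative, not part of the dataset):

```python
# Order of the 18 boolean label columns, as listed in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def decode_labels(row: dict) -> list:
    """Return the arXiv categories whose boolean column is truthy for this row."""
    return [col for col in LABEL_COLUMNS if row.get(col)]

# Example row shaped like the records below (abridged).
row = {
    "id": "1412.2684",
    "title": "HyperSpectral classification with adaptively weighted L1-norm ...",
    "cs.CV": True,
    # all other label columns absent/False
}
print(decode_labels(row))  # ['cs.CV']
```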
1412.2684
HyperSpectral classification with adaptively weighted L1-norm regularization and spatial postprocessing
Sparse regression methods have proven effective in a wide range of signal processing problems such as image compression, speech coding, channel equalization, linear regression and classification. In this paper a new convex method of hyperspectral image classification is developed based on the sparse unmixing algorithm SUnSAL, for which a pixel-adaptive L1-norm regularization term is introduced. To further enhance class separability, the algorithm is kernelized using an RBF kernel, and the final results are improved by a combination of spatial pre- and post-processing operations. It is shown that the proposed method is competitive with state-of-the-art algorithms such as SVM-CK, KSOMP-CK and KSSP-CK.
labels: cs.CV
__index_level_0__: 38,224
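The "adaptively weighted L1-norm regularization" in the abstract above corresponds to a weighted-LASSO objective of roughly the following shape (a generic sketch based on standard sparse-unmixing formulations, not the paper's exact model): with spectral library $\mathbf{A}$, pixel spectrum $\mathbf{b}$, and abundance vector $\mathbf{x}$,

```latex
\min_{\mathbf{x}\ge 0}\;
\tfrac{1}{2}\,\lVert \mathbf{A}\mathbf{x}-\mathbf{b}\rVert_2^2
\;+\; \sum_{i}\lambda_i\,\lvert x_i\rvert
```

where, instead of a single global regularization constant, each weight $\lambda_i$ is adapted per pixel (and per coefficient), which is what makes the penalty "pixel adaptive".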
2410.11655
Retrieval Augmented Spelling Correction for E-Commerce Applications
The rapid introduction of new brand names into everyday language poses a unique challenge for e-commerce spelling correction services, which must distinguish genuine misspellings from novel brand names that use unconventional spelling. We seek to address this challenge via Retrieval Augmented Generation (RAG). In this approach, product names are retrieved from a catalog and incorporated into the context used by a large language model (LLM) that has been fine-tuned to do contextual spelling correction. Through quantitative evaluation and qualitative error analyses, we find improvements in spelling correction utilizing the RAG framework beyond a stand-alone LLM. We also demonstrate the value of additional fine-tuning of the LLM to incorporate retrieved context.
labels: cs.AI, cs.CL
__index_level_0__: 498,660
1912.04453
Enhancing Learnability of classification algorithms using simple data preprocessing in fMRI scans of Alzheimer's disease
Alzheimer's Disease (AD) is the most common type of dementia and, in many countries, one of the leading causes of death among senior citizens. It is currently diagnosed by calculating the MMSE score and by manual study of MRI scans. Different machine learning methods have also been used for automatic diagnosis, but existing approaches are limited in accuracy. In this paper, we propose novel preprocessing techniques that significantly increase the accuracy and at the same time decrease the training time of various classification algorithms. First, we converted the ADNI dataset from 4D format into 2D form. We also mitigated computation costs by reducing the parameters of the input dataset while preserving important and relevant data, using preprocessing steps such as grayscale image conversion, histogram equalization, and selective clipping of the dataset. We observed a peak accuracy of 97.52% and a sensitivity of 97.6% on our testing dataset.
labels: cs.LG
__index_level_0__: 156,841
2406.03496
Wings: Learning Multimodal LLMs without Text-only Forgetting
Multimodal large language models (MLLMs), initiated with a trained LLM, first align images with text and then fine-tune on multimodal mixed inputs. However, the MLLM catastrophically forgets the text-only instructions, which do not include images and can be addressed within the initial LLM. In this paper, we present Wings, a novel MLLM that excels in both text-only dialogues and multimodal comprehension. Analyzing MLLM attention in multimodal instructions reveals that text-only forgetting is related to the attention shifts from pre-image to post-image text. From that, we construct extra modules that act as the boosted learner to compensate for the attention shift. The complementary visual and textual learners, like "wings" on either side, are connected in parallel within each layer's attention block. Initially, image and text inputs are aligned with visual learners operating alongside the main attention, balancing focus on visual elements. Textual learners are later collaboratively integrated with attention-based routing to blend the outputs of the visual and textual learners. We design the Low-Rank Residual Attention (LoRRA) to guarantee high efficiency for learners. Our experimental results demonstrate that Wings outperforms equally-scaled MLLMs in both text-only and visual question-answering tasks. On a newly constructed Interleaved Image-Text (IIT) benchmark, Wings exhibits superior performance from text-only-rich to multimodal-rich question-answering tasks.
labels: cs.AI, cs.LG, cs.CL
__index_level_0__: 461,265
2010.10980
Enhanced User Grouping and Power Allocation for Hybrid mmWave MIMO-NOMA systems
Non-orthogonal multiple access (NOMA) and millimeter wave (mmWave) are two key enabling technologies for the fifth-generation (5G) mobile networks and beyond. In this paper, we consider uplink communications with a hybrid beamforming structure and focus on improving the spectral efficiency (SE) and energy efficiency (EE) of mmWave multiple-input multiple-output (MIMO)-NOMA systems with enhanced user grouping and power allocation. Exploiting the directionality feature of mmWave channels, we first propose a novel initial agglomerative nesting (AGNES)-based user grouping algorithm that takes advantage of the channel correlations. It is noted that the optimization of the SE/EE is a challenging task due to the non-linear programming nature of the corresponding problem involving user grouping, beam selection, and power allocation. Our idea is to decompose the overall optimization problem into a mixed integer problem comprising user grouping and beam selection only, followed by a continuous problem involving power allocation and digital beamforming design. To avoid the prohibitively high complexity of the brute-force search approach, we propose two suboptimal low-complexity user grouping and beam selection schemes, the direct AGNES (DIR-AGNES) scheme and the successive AGNES (SUC-AGNES) scheme. We also introduce the quadratic transform (QT) to recast the non-convex power allocation optimization problem into a convex one subject to a minimum required data rate for each user. The continuous problem is solved by iteratively optimizing the power and the digital beamforming. Extensive simulation results show that our proposed mmWave-NOMA design outperforms the conventional orthogonal multiple access (OMA) scenario and state-of-the-art NOMA schemes.
labels: cs.IT
__index_level_0__: 202,074
1710.00346
Robust Tuning Datasets for Statistical Machine Translation
We explore the idea of automatically crafting a tuning dataset for Statistical Machine Translation (SMT) that makes the hyper-parameters of the SMT system more robust with respect to some specific deficiencies of the parameter tuning algorithms. This is an under-explored research direction, which can allow better parameter tuning. In this paper, we achieve this goal by selecting a subset of the available sentence pairs, which are more suitable for specific combinations of optimizers, objective functions, and evaluation measures. We demonstrate the potential of the idea with the pairwise ranking optimization (PRO) optimizer, which is known to yield too short translations. We show that the learning problem can be alleviated by tuning on a subset of the development set, selected based on sentence length. In particular, using the longest 50% of the tuning sentences, we achieve two-fold tuning speedup, and improvements in BLEU score that rival those of alternatives, which fix BLEU+1's smoothing instead.
labels: cs.CL
__index_level_0__: 81,849
1807.11459
Improving Transferability of Deep Neural Networks
Learning from small amounts of labeled data is a challenge in the area of deep learning. This is currently addressed by Transfer Learning, where one learns the small dataset as a transfer task from a larger source dataset. Transfer Learning can deliver higher accuracy if the hyperparameters and source dataset are chosen well. One of the important parameters is the learning rate for the layers of the neural network. We show through experiments on the ImageNet22k and Oxford Flowers datasets that improvements in accuracy in the range of 127% can be obtained by proper choice of learning rates. We also show that the images/label parameter for a dataset can potentially be used to determine optimal learning rates for the layers to get the best overall accuracy. We additionally validate this method on a sample of real-world image classification tasks from a public visual recognition API.
labels: cs.LG, cs.CV
__index_level_0__: 104,183
2501.07482
TiEBe: A Benchmark for Assessing the Current Knowledge of Large Language Models
Amid a rapidly evolving knowledge landscape and the increasing adoption of large language models, a need has emerged to keep these models continuously updated with current events. While existing benchmarks evaluate general factual recall, they often overlook two critical aspects: the ability of models to integrate evolving knowledge through continual learning and the significant regional disparities in their performance. To address these gaps, we introduce the Timely Events Benchmark (TiEBe), a dataset containing over 11,000 question-answer pairs focused on globally and regionally significant events. TiEBe leverages structured retrospective data from Wikipedia, enabling continuous updates to assess LLMs' knowledge of evolving global affairs and their understanding of events across different regions. Our benchmark demonstrates that LLMs exhibit substantial geographic disparities in factual recall, emphasizing the need for more balanced global knowledge representation. Furthermore, TiEBe serves as a tool for evaluating continual learning strategies, providing insights into models' ability to acquire new information without forgetting past knowledge.
labels: cs.AI, cs.CL
__index_level_0__: 524,411
2212.00790
Sparsity Agnostic Depth Completion
We present a novel depth completion approach agnostic to the sparsity of depth points, which is likely to vary across many practical applications. State-of-the-art approaches yield accurate results only when processing a specific density and distribution of input points, i.e. the one observed during training, narrowing their deployment in real use cases. On the contrary, our solution is robust to uneven distributions and extremely low densities never witnessed during training. Experimental results on standard indoor and outdoor benchmarks highlight the robustness of our framework, achieving accuracy comparable to state-of-the-art methods when tested with density and distribution equal to the training one, while being much more accurate in the other cases. Our pretrained models and further material are available on our project page.
labels: cs.CV
__index_level_0__: 334,197
2309.15601
A Study on Tiny YOLO for Resource Constrained Xray Threat Detection
This paper implements and analyzes multiple networks with the goal of understanding their suitability for edge device applications such as X-ray threat detection. In this study, we use the state-of-the-art YOLO object detection model to solve the task of detecting threats in security baggage screening images. We designed and studied three models: Tiny YOLO, QCFS Tiny YOLO, and SNN Tiny YOLO. We utilize an alternative activation function calculated to have zero expected conversion error with the activation of a spiking activation function in our Tiny YOLOv7 model. This QCFS version of Tiny YOLO replicates the activation function from an ultra-low-latency and high-efficiency SNN architecture and achieves state-of-the-art performance on CLCXray, an open-source X-ray threat detection dataset. In addition, we also study the behavior of a Spiking Tiny YOLO on the same X-ray threat detection dataset.
labels: cs.NE
__index_level_0__: 395,035
2306.05705
On the Challenges and Perspectives of Foundation Models for Medical Image Analysis
This article discusses the opportunities, applications and future directions of large-scale pre-trained models, i.e., foundation models, for analyzing medical images. Medical foundation models have immense potential in solving a wide range of downstream tasks, as they can help to accelerate the development of accurate and robust models, reduce the large amounts of required labeled data, and preserve the privacy and confidentiality of patient data. Specifically, we illustrate the "spectrum" of medical foundation models, ranging from general vision models and modality-specific models to organ/task-specific models, highlighting their challenges, opportunities and applications. We also discuss how foundation models can be leveraged in downstream medical tasks to enhance the accuracy and efficiency of medical image analysis, leading to more precise diagnosis and treatment decisions.
labels: cs.CV
__index_level_0__: 372,304
2204.02752
Multi-Objective Evolutionary Beer Optimisation
Food production is a complex process which can benefit from many optimisation approaches. However, there is growing interest in methods that support customisation of food properties to satisfy individual consumer preferences. This paper addresses the personalisation of beer properties. Having identified components of the production process for craft beers whose production tends to be less standardised, we introduce a system which enables brewers to map the desired beer properties into ingredient dosage and combination. Previously explored approaches include direct use of structural equations as well as global machine learning methods. We introduce a framework which uses an evolutionary method supporting multi-objective optimisation. This work identifies problem-dependent objectives and their associations, and proposes a workflow to automate the discovery of multiple novel recipes based on user-defined criteria. The quality of the solutions generated by the multi-objective optimiser is compared against solutions from multiple runs of the method, and those of a single-objective evolutionary technique. This comparison provides a road-map allowing users to choose among more varied options or to fine-tune one of the favourite identified solutions. The experiments presented here demonstrate the usability of the framework as well as the transparency of its criteria.
labels: cs.NE
__index_level_0__: 290,075
2403.03635
Processing Load Allocation of On-Board Multi-User Detection for Payload-Constrained Satellite Networks
The rapid advance of mega-constellations facilitates the boom of direct-to-satellite massive access, where multi-user detection is critical to alleviate the induced inter-user interference. While centralized implementation of on-board detection induces unaffordable complexity for a single satellite, this paper proposes to allocate the processing load among cooperative satellites to fully exploit distributed processing power. Observing the inherent disparities among users, we first derive the closed-form trade-offs between achievable sum-rate and the processing load corresponding to the satellite-user matchings, which leads to a system sum-rate maximization problem under stringent payload constraints. To address the non-trivial integer matching, we develop a quadratic transformation of the original problem and prove that it is an equivalent conversion. The problem is further simplified into a series of subproblems employing successive lower bound approximation, which attains polynomial-time complexity and converges within a few iterations. Numerical results show remarkable complexity reduction compared with centralized processing, as well as around a 20% sum-rate gain compared with other allocation methods.
labels: cs.IT
__index_level_0__: 435,276
2311.04261
Restoration of Analog Videos Using Swin-UNet
In this paper, we present a system to restore analog videos of historical archives. These videos often contain severe visual degradation due to the deterioration of their tape supports that require costly and slow manual interventions to recover the original content. The proposed system uses a multi-frame approach and is able to deal with severe tape mistracking, which results in completely scrambled frames. Tests on real-world videos from a major historical video archive show the effectiveness of our demo system. The code and the pre-trained model are publicly available at https://github.com/miccunifi/analog-video-restoration.
labels: cs.CV, Other
__index_level_0__: 406,169
2306.01951
GAD-NR: Graph Anomaly Detection via Neighborhood Reconstruction
Graph Anomaly Detection (GAD) is a technique used to identify abnormal nodes within graphs, finding applications in network security, fraud detection, social media spam detection, and various other domains. A common method for GAD is Graph Auto-Encoders (GAEs), which encode graph data into node representations and identify anomalies by assessing the reconstruction quality of the graphs based on these representations. However, existing GAE models are primarily optimized for direct link reconstruction, resulting in nodes connected in the graph being clustered in the latent space. As a result, they excel at detecting cluster-type structural anomalies but struggle with more complex structural anomalies that do not conform to clusters. To address this limitation, we propose a novel solution called GAD-NR, a new variant of GAE that incorporates neighborhood reconstruction for graph anomaly detection. GAD-NR aims to reconstruct the entire neighborhood of a node, encompassing the local structure, self-attributes, and neighbor attributes, based on the corresponding node representation. By comparing the neighborhood reconstruction loss between anomalous nodes and normal nodes, GAD-NR can effectively detect any anomalies. Extensive experimentation conducted on six real-world datasets validates the effectiveness of GAD-NR, showcasing significant improvements (by up to 30% in AUC) over state-of-the-art competitors. The source code for GAD-NR is openly available. Importantly, the comparative analysis reveals that the existing methods perform well only in detecting one or two types of anomalies out of the three types studied. In contrast, GAD-NR excels at detecting all three types of anomalies across the datasets, demonstrating its comprehensive anomaly detection capabilities.
labels: cs.LG
__index_level_0__: 370,689
cs/0512002
On Self-Regulated Swarms, Societal Memory, Speed and Dynamics
We propose a Self-Regulated Swarm (SRS) algorithm which hybridizes the advantageous characteristics of Swarm Intelligence, namely the emergence of a societal environmental memory or cognitive map via collective pheromone laying in the landscape (properly balancing the exploration/exploitation nature of our dynamic search strategy), with a simple evolutionary mechanism that, through a direct reproduction procedure linked to local environmental features, is able to self-regulate the exploratory swarm population, speeding it up globally. In order to test its adaptive response and robustness, we have resorted to different dynamic multimodal complex functions as well as to dynamic optimization control problems, measuring reaction speeds and performance. Final comparisons were made with standard Genetic Algorithms (GAs), Bacterial Foraging strategies (BFOA), as well as with recent co-evolutionary approaches. SRSs were able to demonstrate quick adaptive responses, while outperforming the results obtained by the other approaches. Additionally, some successful behaviors were found. One of the most interesting illustrates that the present SRS collective swarm of bio-inspired ant-like agents is able to track about 65% of moving peaks traveling up to ten times faster than the velocity of a single individual composing that precise swarm tracking system.
labels: cs.AI, cs.NE
__index_level_0__: 539,116
1805.00097
Syntactic Patterns Improve Information Extraction for Medical Search
Medical professionals search the published literature by specifying the type of patients, the medical intervention(s) and the outcome measure(s) of interest. In this paper we demonstrate how features encoding syntactic patterns improve the performance of state-of-the-art sequence tagging models (both linear and neural) for information extraction of these medically relevant categories. We present an analysis of the type of patterns exploited, and the semantic space induced for these, i.e., the distributed representations learned for identified multi-token patterns. We show that these learned representations differ substantially from those of the constituent unigrams, suggesting that the patterns capture contextual information that is otherwise lost.
labels: cs.CL
__index_level_0__: 96,355
2407.10413
Melon Fruit Detection and Quality Assessment Using Generative AI-Based Image Data Augmentation
Monitoring and managing the growth and quality of fruits are very important tasks. To effectively train deep learning models like YOLO for real-time fruit detection, high-quality image datasets are essential. However, such datasets are often lacking in agriculture. Generative AI models can help create high-quality images. In this study, we used MidJourney and Firefly tools to generate images of melon greenhouses and post-harvest fruits through text-to-image, pre-harvest image-to-image, and post-harvest image-to-image methods. We evaluated these AI-generated images using PSNR and SSIM metrics and tested the detection performance of the YOLOv9 model. We also assessed the net quality of real and generated fruits. Our results showed that generative AI could produce images very similar to real ones, especially for post-harvest fruits. The YOLOv9 model detected the generated images well, and the net quality was also measurable. This shows that generative AI can create realistic images useful for fruit detection and quality assessment, indicating its great potential in agriculture. This study highlights the potential of AI-generated images for data augmentation in melon fruit detection and quality assessment and envisions a positive future for generative AI applications in agriculture.
labels: cs.AI, cs.CV
__index_level_0__: 472,976
2501.05593
Bounds on Box Codes
Let $n_q(M,d)$ be the minimum length of a $q$-ary code of size $M$ and minimum distance $d$. Bounding $n_q(M,d)$ is a fundamental problem that lies at the heart of coding theory. This work considers a generalization $n^{\mathrm{box}}_q(M,d)$ of $n_q(M,d)$ corresponding to codes in which codewords have protected and unprotected entries, where (analogs of) distance and of length are measured with respect to protected entries only. Such codes, here referred to as box codes, have seen prior studies in the context of bipartite graph covering. Upper and lower bounds on $n^{\mathrm{box}}_q(M,d)$ are presented.
labels: cs.IT
__index_level_0__: 523,651
2310.00750
Identifying Copeland Winners in Dueling Bandits with Indifferences
We consider the task of identifying the Copeland winner(s) in a dueling bandits problem with ternary feedback. This is an underexplored but practically relevant variant of the conventional dueling bandits problem, in which, in addition to strict preference between two arms, one may observe feedback in the form of an indifference. We provide a lower bound on the sample complexity for any learning algorithm finding the Copeland winner(s) with a fixed error probability. Moreover, we propose POCOWISTA, an algorithm with a sample complexity that almost matches this lower bound, and which shows excellent empirical performance, even for the conventional dueling bandits problem. For the case where the preference probabilities satisfy a specific type of stochastic transitivity, we provide a refined version with an improved worst case sample complexity.
labels: cs.LG
__index_level_0__: 396,128
2011.01809
Kinodynamic Motion Planning for Multi-Legged Robot Jumping via Mixed-Integer Convex Program
This paper proposes a kinodynamic motion planning framework for multi-legged robot jumping based on the mixed-integer convex program (MICP), which simultaneously reasons about centroidal motion, contact points, wrench, and gait sequences. This method uniquely combines configuration space discretization and the construction of feasible wrench polytope (FWP) to encode kinematic constraints, actuator limit, friction cone constraint, and gait sequencing into a single MICP. The MICP could be efficiently solved to the global optimum by off-the-shelf numerical solvers and provide highly dynamic jumping motions without requiring initial guesses. Simulation and experimental results demonstrate that the proposed method could find novel and dexterous maneuvers that are directly deployable on the two-legged robot platform to traverse through challenging terrains.
labels: cs.RO
__index_level_0__: 204,714
2403.09270
A Deep Reinforcement Learning Approach for Autonomous Reconfigurable Intelligent Surfaces
A reconfigurable intelligent surface (RIS) is a prospective wireless technology that enhances wireless channel quality. An RIS is often equipped with passive array of elements and provides cost and power-efficient solutions for coverage extension of wireless communication systems. Without any radio frequency (RF) chains or computing resources, however, the RIS requires control information to be sent to it from an external unit, e.g., a base station (BS). The control information can be delivered by wired or wireless channels, and the BS must be aware of the RIS and the RIS-related channel conditions in order to effectively configure its behavior. Recent works have introduced hybrid RIS structures possessing a few active elements that can sense and digitally process received data. Here, we propose the operation of an entirely autonomous RIS that operates without a control link between the RIS and BS. Using a few sensing elements, the autonomous RIS employs a deep Q network (DQN) based on reinforcement learning in order to enhance the sum rate of the network. Our results illustrate the potential of deploying autonomous RISs in wireless networks with essentially no network overhead.
labels: cs.IT
__index_level_0__: 437,699
2404.05210
Bidirectional Long-Range Parser for Sequential Data Understanding
The transformer is a powerful data modelling framework responsible for remarkable performance on a wide range of tasks. However, it is limited in terms of scalability, as processing long-sequence data is suboptimal and inefficient. To this end we introduce BLRP (Bidirectional Long-Range Parser), a novel and versatile attention mechanism designed to increase performance and efficiency on long-sequence tasks. It leverages short- and long-range heuristics in the form of a local sliding-window approach combined with a global bidirectional latent space synthesis technique. We show the benefits and versatility of our approach on vision and language domains by demonstrating competitive results against state-of-the-art methods on the Long Range Arena and CIFAR benchmarks, together with ablations demonstrating the computational efficiency.
labels: cs.LG, cs.CL, cs.CV
__index_level_0__: 444,984
2101.02594
On Satisficing in Quantitative Games
Several problems in planning and reactive synthesis can be reduced to the analysis of two-player quantitative graph games. Optimization is one form of analysis. We argue that in many cases it may be better to replace the optimization problem with the satisficing problem, where instead of searching for optimal solutions, the goal is to search for solutions that adhere to a given threshold bound. This work defines and investigates the satisficing problem on a two-player graph game with the discounted-sum cost model. We show that while the satisficing problem can be solved using numerical methods just like the optimization problem, this approach does not render compelling benefits over optimization. When the discount factor is, however, an integer, we present another approach to satisficing, which is purely based on automata methods. We show that this approach is algorithmically more performant, both theoretically and empirically, and demonstrates the broader applicability of satisficing over optimization.
labels: cs.AI, Other
__index_level_0__: 214,679
2103.00784
LiTAMIN2: Ultra Light LiDAR-based SLAM using Geometric Approximation applied with KL-Divergence
In this paper, a three-dimensional light detection and ranging (LiDAR) simultaneous localization and mapping (SLAM) method is proposed that is capable of tracking and mapping at 500--1000 Hz. The proposed method significantly reduces the number of points used for point cloud registration using a novel ICP metric to speed up the registration process while maintaining accuracy. Point cloud registration with ICP is less accurate when the number of points is reduced, because ICP basically minimizes the distance between points. To avoid this problem, symmetric KL-divergence is introduced into the ICP cost to reflect the difference between two probabilistic distributions. The cost includes not only the distance between points but also differences between distribution shapes. The experimental results on the KITTI dataset indicate that the proposed method has high computational efficiency, strongly outperforms other methods, and has similar accuracy to the state-of-the-art SLAM method.
labels: cs.RO
__index_level_0__: 222,399
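The symmetric KL-divergence ICP cost described in this abstract builds on the standard closed form of the KL divergence between two $d$-dimensional Gaussians (a textbook identity, not the paper's exact cost):

```latex
D_{\mathrm{KL}}\!\left(\mathcal{N}(\mu_0,\Sigma_0)\,\Vert\,\mathcal{N}(\mu_1,\Sigma_1)\right)
 = \tfrac{1}{2}\Big[\operatorname{tr}\!\left(\Sigma_1^{-1}\Sigma_0\right)
   + (\mu_1-\mu_0)^{\top}\Sigma_1^{-1}(\mu_1-\mu_0) - d
   + \ln\tfrac{\det\Sigma_1}{\det\Sigma_0}\Big],
\qquad
D_{\mathrm{sym}} = D_{\mathrm{KL}}(p\,\Vert\,q) + D_{\mathrm{KL}}(q\,\Vert\,p).
```

The trace and log-determinant terms compare distribution shapes while the quadratic term compares means, which is how such a cost can penalize both point distance and shape mismatch.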
2308.13499
MRNAV: Multi-Robot Aware Planning and Control Stack for Collision and Deadlock-free Navigation in Cluttered Environments
Multi-robot collision-free and deadlock-free navigation in cluttered environments with static and dynamic obstacles is a fundamental problem for many applications. We introduce MRNAV, a framework for planning and control to effectively navigate in such environments. Our design utilizes short, medium, and long horizon decision making modules with qualitatively different properties, and defines their responsibilities. The decision making modules complement each other and provide the effective navigation capability. MRNAV is the first hierarchical approach combining these three levels of decision making modules and explicitly defining their responsibilities. We implement our design for simulated multi-quadrotor flight. In our evaluations, we show that all three modules are required for effective navigation in diverse situations. We show the long-term executability of our approach in an uninterrupted eight-hour wall-clock (six-hour simulation-time) run without collisions or deadlocks.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
387,943
2401.13945
General Automatic Solution Generation of Social Problems
Given the escalating intricacy and multifaceted nature of contemporary social systems, manually generating solutions to address pertinent social issues has become a formidable task. In response to this challenge, the rapid development of artificial intelligence has spurred the exploration of computational methodologies aimed at automatically generating solutions. However, current methods for auto-generation of solutions mainly concentrate on local social regulations that pertain to specific scenarios. Here, we report an automatic social operating system (ASOS) designed for general social solution generation, which is built upon agent-based models, enabling both global and local analyses and regulations of social problems across spatial and temporal dimensions. ASOS adopts a hypergraph with extensible social semantics for a comprehensive and structured representation of social dynamics. It also incorporates a generalized protocol for standardized hypergraph operations and a symbolic hybrid framework that delivers interpretable solutions, yielding a balance between regulatory efficacy and function viability. To demonstrate the effectiveness of ASOS, we apply it to the domain of averting extreme events within international oil futures markets. By generating a new trading role supplemented by new mechanisms, ASOS can adeptly discern precarious market conditions and make front-running interventions for non-profit purposes. This study demonstrates that ASOS provides an efficient and systematic approach for generating solutions for enhancing our society.
false
true
false
false
true
false
false
false
false
false
false
false
false
true
true
false
false
false
423,909
1904.04403
Discovering Bands from Graphs
Discovering the underlying structure of a given graph is one of the fundamental goals in graph mining. Given a graph, we can often order vertices in a way that neighboring vertices have a higher probability of being connected to each other. This implies that the edges form a band around the diagonal in the adjacency matrix. Such structure may arise, for example, if the graph was created over time: each vertex had an active time interval during which the vertex was connected with other active vertices. The goal of this paper is to model this phenomenon. To this end, we formulate an optimization problem: given a graph and an integer $K$, we want to order graph vertices and partition the ordered adjacency matrix into $K$ bands such that bands closer to the diagonal are more dense. We measure the goodness of a segmentation using the log-likelihood of a log-linear model, a flexible family of distributions containing many standard distributions. We divide the problem into two subproblems: finding the order and finding the bands. We show that discovering bands can be done in polynomial time with isotonic regression, and we also introduce a heuristic iterative approach. For discovering the order we use Fiedler order accompanied with a simple combinatorial refinement. We demonstrate empirically that our heuristic works well in practice.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
127,025
2212.04247
EditableNeRF: Editing Topologically Varying Neural Radiance Fields by Key Points
Neural radiance fields (NeRF) achieve highly photo-realistic novel-view synthesis, but it's a challenging problem to edit the scenes modeled by NeRF-based methods, especially for dynamic scenes. We propose editable neural radiance fields that enable end-users to easily edit dynamic scenes and even support topological changes. Given an image sequence from a single camera as input, our network is trained fully automatically and models topologically varying dynamics using our picked-out surface key points. Then end-users can edit the scene by easily dragging the key points to desired new positions. To achieve this, we propose a scene analysis method to detect and initialize key points by considering the dynamics in the scene, and a weighted key points strategy to model topologically varying dynamics by joint key points and weights optimization. Our method supports intuitive multi-dimensional (up to 3D) editing and can generate novel scenes that are unseen in the input sequence. Experiments demonstrate that our method achieves high-quality editing on various dynamic scenes and outperforms the state-of-the-art. Our code and captured data are available at https://chengwei-zheng.github.io/EditableNeRF/.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
335,377
1605.04475
Capturing divergence in dependency trees to improve syntactic projection
Obtaining syntactic parses is a crucial part of many NLP pipelines. However, most of the world's languages do not have large amounts of syntactically annotated corpora available for building parsers. Syntactic projection techniques attempt to address this issue by using parallel corpora consisting of resource-poor and resource-rich language pairs, taking advantage of a parser for the resource-rich language and word alignment between the languages to project the parses onto the data for the resource-poor language. These projection methods can suffer, however, when the two languages are divergent. In this paper, we investigate the possibility of using small, parallel, annotated corpora to automatically detect divergent structural patterns between two languages. These patterns can then be used to improve structural projection algorithms, allowing for better performing NLP tools for resource-poor languages, in particular those that may not have large amounts of annotated data necessary for traditional, fully-supervised methods. While this detection process is not exhaustive, we demonstrate that common patterns of divergence can be identified automatically without prior knowledge of a given language pair, and the patterns can be used to improve performance of projection algorithms.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
55,871
2409.17293
A two-scale computational homogenization approach for elastoplastic truss-based lattice structures
The revolutionary advancements in metal additive manufacturing have enabled the production of alloy-based lattice structures with complex geometrical features and high resolutions. This has encouraged the development of nonlinear material models, including plasticity, damage, etc., for such materials. However, the prohibitive computational cost arising from the high number of degrees of freedom for engineering structures composed of lattice structures highlights the necessity of homogenization techniques, such as the two-scale computational homogenization method. In the present work, a two-scale homogenization approach with on-the-fly exchange of information is adopted to study the elastoplastic behavior of truss-based lattice structures. The macroscopic homogenized structure is represented by a two-dimensional continuum, while the underlying microscale lattices are modeled as a network of one-dimensional truss elements. This helps to significantly reduce the associated computational cost by reducing the microscopic degrees of freedom. The microscale trusses are assumed to exhibit an elastoplastic material behavior characterized by a combination of nonlinear exponential isotropic hardening and linear kinematic hardening. Through multiple numerical examples, the performance of the adopted homogenization approach is examined by comparing forces and displacements with direct numerical simulations of discrete structures for three types of stretching-dominated lattice topologies, including triangular, X-braced and X-Plus-braced unit cells. Furthermore, the principle of scale separation, which emphasizes the need for an adequate separation between the macroscopic and microscopic characteristic lengths, is investigated.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
491,726
2406.13342
ZeroDL: Zero-shot Distribution Learning for Text Clustering via Large Language Models
The recent advancements in large language models (LLMs) have brought significant progress in solving NLP tasks. Notably, in-context learning (ICL) is the key enabling mechanism for LLMs to understand specific tasks and grasp nuances. In this paper, we propose a simple yet effective method to contextualize a task toward a specific LLM by (1) observing how a given LLM describes (all or a part of) target datasets, i.e., open-ended zero-shot inference, (2) aggregating the open-ended inference results by the LLM, and (3) finally incorporating the aggregated meta-information into the actual task. We show the effectiveness of this approach in text clustering tasks, and also highlight the importance of the contextualization through examples of the above procedure.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
465,808
2302.02255
Human-Imperceptible Identification with Learnable Lensless Imaging
Lensless imaging protects visual privacy by capturing heavily blurred images that are imperceptible for humans to recognize the subject but contain enough information for machines to infer information. Unfortunately, protecting visual privacy comes with a reduction in recognition accuracy and vice versa. We propose a learnable lensless imaging framework that protects visual privacy while maintaining recognition accuracy. To make captured images imperceptible to humans, we designed several loss functions based on total variation, invertibility, and the restricted isometry property. We studied the effect of privacy protection with blurriness on the identification of personal identity via a quantitative method based on a subjective evaluation. Moreover, we validate our simulation by implementing a hardware realization of lensless imaging with photo-lithographically printed masks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
343,924
2011.04151
"What Do You Mean by That?" A Parser-Independent Interactive Approach for Enhancing Text-to-SQL
In Natural Language Interfaces to Databases systems, the text-to-SQL technique allows users to query databases by using natural language questions. Though significant progress in this area has been made recently, most parsers may fall short when they are deployed in real systems. One main reason stems from the difficulty of fully understanding the users' natural language questions. In this paper, we include humans in the loop and present a novel parser-independent interactive approach (PIIA) that interacts with users using multi-choice questions and can easily work with arbitrary parsers. Experiments were conducted on two cross-domain datasets, the WikiSQL and the more complex Spider, with five state-of-the-art parsers. The experiments demonstrate that PIIA is capable of enhancing the text-to-SQL performance with limited interaction turns, using both simulation and human evaluation.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
205,483
2209.12351
Deep Reinforcement Learning for Adaptive Mesh Refinement
Finite element discretizations of problems in computational physics often rely on adaptive mesh refinement (AMR) to preferentially resolve regions containing important features during simulation. However, these spatial refinement strategies are often heuristic and rely on domain-specific knowledge or trial-and-error. We treat the process of adaptive mesh refinement as a local, sequential decision-making problem under incomplete information, formulating AMR as a partially observable Markov decision process. Using a deep reinforcement learning approach, we train policy networks for AMR strategy directly from numerical simulation. The training process does not require an exact solution or a high-fidelity ground truth to the partial differential equation at hand, nor does it require a pre-computed training dataset. The local nature of our reinforcement learning formulation allows the policy network to be trained inexpensively on much smaller problems than those on which they are deployed. The methodology is not specific to any particular partial differential equation, problem dimension, or numerical discretization, and can flexibly incorporate diverse problem physics. To that end, we apply the approach to a diverse set of partial differential equations, using a variety of high-order discontinuous Galerkin and hybridizable discontinuous Galerkin finite element discretizations. We show that the resultant deep reinforcement learning policies are competitive with common AMR heuristics, generalize well across problem classes, and strike a favorable balance between accuracy and cost such that they often lead to a higher accuracy per problem degree of freedom.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
319,502
2307.01917
Stranding Risk for Underactuated Vessels in Complex Ocean Currents: Analysis and Controllers
Low-propulsion vessels can take advantage of powerful ocean currents to navigate towards a destination. Recent results demonstrated that vessels can reach their destination with high probability despite forecast errors. However, these results do not consider the critical aspect of safety of such vessels: because of their low propulsion which is much smaller than the magnitude of currents, they might end up in currents that inevitably push them into unsafe areas such as shallow areas, garbage patches, and shipping lanes. In this work, we first investigate the risk of stranding for free-floating vessels in the Northeast Pacific. We find that at least 5.04% would strand within 90 days. Next, we encode the unsafe sets as hard constraints into Hamilton-Jacobi Multi-Time Reachability (HJ-MTR) to synthesize a feedback policy that is equivalent to re-planning at each time step at low computational cost. While applying this policy closed-loop guarantees safe operation when the currents are known, in realistic situations only imperfect forecasts are available. We demonstrate the safety of our approach in such realistic situations empirically with large-scale simulations of a vessel navigating in high-risk regions in the Northeast Pacific. We find that applying our policy closed-loop with daily re-planning on new forecasts can ensure safety with high probability even under forecast errors that exceed the maximal propulsion. Our method significantly improves safety over the baselines and still achieves a timely arrival of the vessel at the destination.
false
false
false
false
true
false
false
true
false
false
true
false
false
false
false
false
false
false
377,510
2004.03665
Interval Observers for Simultaneous State and Model Estimation of Partially Known Nonlinear Systems
We study the problem of designing interval-valued observers that simultaneously estimate the system state and learn an unknown dynamic model for partially unknown nonlinear systems with dynamic unknown inputs and bounded noise signals. Leveraging affine abstraction methods and the existence of nonlinear decomposition functions, as well as applying our previously developed data-driven function over-approximation/abstraction approach to over-estimate the unknown dynamic model, our proposed observer recursively computes the maximal and minimal elements of the estimate intervals that are proven to contain the true augmented states. Then, using observed output/measurement signals, the observer iteratively shrinks the intervals by eliminating estimates that are not compatible with the measurements. Finally, given new interval estimates, the observer updates the over-approximation of the unknown model dynamics. Moreover, we provide sufficient conditions for uniform boundedness of the sequence of estimate interval widths, i.e., stability of the designed observer, in the form of tractable (mixed-)integer programs with finitely countable feasible sets.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
171,636
2408.05426
SAM-FNet: SAM-Guided Fusion Network for Laryngo-Pharyngeal Tumor Detection
Laryngo-pharyngeal cancer (LPC) is a highly fatal malignant disease affecting the head and neck region. Previous studies on endoscopic tumor detection, particularly those leveraging dual-branch network architectures, have shown significant advancements in tumor detection. These studies highlight the potential of dual-branch networks in improving diagnostic accuracy by effectively integrating global and local (lesion) feature extraction. However, they are still limited in their capabilities to accurately locate the lesion region and capture the discriminative feature information between the global and local branches. To address these issues, we propose a novel SAM-guided fusion network (SAM-FNet), a dual-branch network for laryngo-pharyngeal tumor detection. By leveraging the powerful object segmentation capabilities of the Segment Anything Model (SAM), we introduce the SAM into the SAM-FNet to accurately segment the lesion region. Furthermore, we propose a GAN-like feature optimization (GFO) module to capture the discriminative features between the global and local branches, enhancing the fusion feature complementarity. Additionally, we collect two LPC datasets from the First Affiliated Hospital (FAHSYSU) and the Sixth Affiliated Hospital (SAHSYSU) of Sun Yat-sen University. The FAHSYSU dataset is used as the internal dataset for training the model, while the SAHSYSU dataset is used as the external dataset for evaluating the model's performance. Extensive experiments on both datasets of FAHSYSU and SAHSYSU demonstrate that the SAM-FNet can achieve competitive results, outperforming the state-of-the-art counterparts. The source code of SAM-FNet is available at the URL of https://github.com/VVJia/SAM-FNet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
479,777
1909.02314
Commonsense Reasoning Using WordNet and SUMO: a Detailed Analysis
We describe a detailed analysis of a sample of a large benchmark of commonsense reasoning problems that has been automatically obtained from WordNet, SUMO and their mapping. The objective is to provide a better assessment of the quality of both the benchmark and the involved knowledge resources for advanced commonsense reasoning tasks. By means of this analysis, we are able to detect some knowledge misalignments, mapping errors and lack of knowledge and resources. Our final objective is the extraction of some guidelines towards a better exploitation of this commonsense knowledge framework by the improvement of the included resources.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
144,163
2209.08785
A Robust Distributed Model Predictive Control Framework for Consensus of Multi-Agent Systems with Input Constraints and Varying Delays
This paper studies the consensus problem of general linear discrete-time multi-agent systems (MAS) with input constraints and bounded time-varying communication delays. We propose a robust distributed model predictive control (DMPC) consensus protocol that integrates the offline consensus design with online DMPC optimization to exploit their respective advantages. More precisely, each agent is equipped with an offline consensus protocol, which is a priori designed, depending on its immediate neighbors' estimated states. Further, the estimation errors propagated over time due to inexact neighboring information are proved bounded under mild technical assumptions, based on which a robust DMPC strategy is deliberately designed to achieve robust consensus while satisfying input constraints. Moreover, it is shown that, with the suitably designed cost function and constraints, the feasibility of the associated optimization problem can be recursively ensured. We further provide the consensus convergence result of the constrained MAS in the presence of bounded varying delays. Finally, two numerical examples are given to verify the effectiveness of the proposed distributed consensus algorithm.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
318,268
1810.02068
Towards Fast and Energy-Efficient Binarized Neural Network Inference on FPGA
Binarized Neural Network (BNN) removes bitwidth redundancy in classical CNN by using a single bit (-1/+1) for network parameters and intermediate representations, which has greatly reduced the off-chip data transfer and storage overhead. However, a large amount of computation redundancy still exists in BNN inference. By analyzing local properties of images and the learned BNN kernel weights, we observe an average of $\sim$78% input similarity and $\sim$59% weight similarity among weight kernels, measured by our proposed metric in common network architectures. Thus there does exist redundancy that can be exploited to further reduce the amount of on-chip computations. Motivated by the observation, in this paper, we propose two types of fast and energy-efficient architectures for BNN inference. We also provide analysis and insights to pick the better strategy of these two for different datasets and network models. By reusing results from previous computations, many cycles of data buffer access and computation can be skipped. Through experiments, we demonstrate that 80% of the computation and 40% of the buffer access can be skipped by exploiting BNN similarity. Thus, our design can achieve 17% reduction in total power consumption, 54% reduction in on-chip power consumption and 2.4$\times$ maximum speedup, compared to the baseline without applying our reuse technique. Our design also shows 1.9$\times$ more area-efficiency compared to state-of-the-art BNN inference design. We believe our deployment of BNN on FPGA leads to a promising future of running deep learning models on mobile devices.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
true
109,527
2304.03388
EZClone: Improving DNN Model Extraction Attack via Shape Distillation from GPU Execution Profiles
Deep Neural Networks (DNNs) have become ubiquitous due to their performance on prediction and classification problems. However, they face a variety of threats as their usage spreads. Model extraction attacks, which steal DNNs, endanger intellectual property, data privacy, and security. Previous research has shown that system-level side-channels can be used to leak the architecture of a victim DNN, exacerbating these risks. We propose two DNN architecture extraction techniques catering to various threat models. The first technique uses a malicious, dynamically linked version of PyTorch to expose a victim DNN architecture through the PyTorch profiler. The second, called EZClone, exploits aggregate (rather than time-series) GPU profiles as a side-channel to predict DNN architecture, employing a simple approach and assuming little adversary capability as compared to previous work. We investigate the effectiveness of EZClone when minimizing the complexity of the attack, when applied to pruned models, and when applied across GPUs. We find that EZClone correctly predicts DNN architectures for the entire set of PyTorch vision architectures with 100% accuracy. No other work has shown this degree of architecture prediction accuracy with the same adversarial constraints or using aggregate side-channel information. Prior work has shown that, once a DNN has been successfully cloned, further attacks such as model evasion or model inversion can be accelerated significantly.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
356,787
2109.09410
Background-Foreground Segmentation for Interior Sensing in Automotive Industry
To ensure safety in automated driving, the correct perception of the situation inside the car is as important as its environment. Thus, seat occupancy detection and classification of detected instances play an important role in interior sensing. By the knowledge of the seat occupancy status, it is possible to, e.g., automate the airbag deployment control. Furthermore, the presence of a driver, which is necessary for partially automated driving cars at automation levels two to four, can be verified. In this work, we compare different statistical methods from the field of image segmentation to approach the problem of background-foreground segmentation in camera based interior sensing. In the recent years, several methods based on different techniques have been developed and applied to images or videos from different applications. The peculiarity of the given scenarios of interior sensing is that the foreground instances and the background both contain static as well as dynamic elements. In the data considered in this work, even the camera position is not completely fixed. We review and benchmark three different methods, i.e., Gaussian Mixture Models (GMM), Morphological Snakes, and a deep neural network, namely a Mask R-CNN. In particular, the limitations of the classical methods, GMM and Morphological Snakes, for interior sensing are shown. Furthermore, it turns out that it is possible to overcome these limitations by deep learning, e.g., using a Mask R-CNN. Although only a small amount of ground truth data was available for training, we enabled the Mask R-CNN to produce high quality background-foreground masks via transfer learning. Moreover, we demonstrate that certain augmentation as well as pre- and post-processing methods further enhance the performance of the investigated methods.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
256,270
1809.04329
Privacy-Utility Management of Hypothesis Tests
The trade-off of hypothesis tests on the correlated privacy hypothesis and utility hypothesis is studied. The error exponent of the Bayesian composite hypothesis test on the privacy or utility hypothesis can be characterized by the corresponding minimal Chernoff information rate. An optimal management protects the privacy by minimizing the error exponent of the privacy hypothesis test and meanwhile guarantees the utility hypothesis testing performance by satisfying a lower bound on the corresponding minimal Chernoff information rate. The asymptotic minimum error exponent of the privacy hypothesis test is shown to be characterized by the infimum of corresponding minimal Chernoff information rates subject to the utility guarantees.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
107,541
2211.09519
A Human-friendly Verbal Communication Platform for Multi-Robot Systems: Design and Principles
While multi-robot systems have been broadly researched and deployed, their success is built chiefly upon the dependency on network infrastructures, whether wired or wireless. Aiming at the first steps toward de-coupling the application of multi-robot systems from the reliance on network infrastructures, this paper proposes a human-friendly verbal communication platform for multi-robot systems, following the deliberately designed principles of being adaptable, transparent, and secure. The platform is network independent and is subsequently capable of functioning in environments lacking network infrastructure, from underwater to planetary exploration. A series of experiments were conducted to demonstrate the platform's capability in multi-robot systems communication and task coordination, showing its potential in infrastructure-free applications. To benefit the community, we have made the codes open source at https://github.com/jynxmagic/MSc_AI_project
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
331,003
1805.11698
Coded Computation Against Distributed Straggling Channel Decoders in the Cloud for Gaussian Uplink Channels
The uplink of a Cloud Radio Access Network (CRAN) architecture is studied, where decoding at the cloud takes place at distributed decoding processors. To mitigate the impact of straggling decoders in the cloud, the cloud re-encodes the received frames via a linear code before distributing them to the decoding processors. Focusing on Gaussian channels, and assuming the use of lattice codes at the users, in this paper the maximum user rate is derived such that all the servers can reliably recover the linear combinations of the messages corresponding to the employed linear code at the cloud. Furthermore, two analytical upper bounds on the frame error rate (FER) as a function of the decoding latency are developed, in order to quantify the performance of the cloud's linear code in terms of the tradeoff between FER and decoding latency at the cloud.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
98,976
2210.09277
Unsupervised Optimal Power Flow Using Graph Neural Networks
Optimal power flow (OPF) is a critical optimization problem that allocates power to the generators in order to satisfy the demand at a minimum cost. Solving this problem exactly is computationally infeasible in the general case. In this work, we propose to leverage graph signal processing and machine learning. More specifically, we use a graph neural network to learn a nonlinear parametrization between the power demanded and the corresponding allocation. We learn the solution in an unsupervised manner, minimizing the cost directly. In order to take into account the electrical constraints of the grid, we propose a novel barrier method that is differentiable and works on initially infeasible points. We show through simulations that the use of GNNs in this unsupervised learning context leads to solutions comparable to standard solvers while being computationally efficient and avoiding constraint violations most of the time.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
324,479
2008.11083
Using the discrete radon transformation for grayscale image moments
Image moments are weighted sums over pixel values in a given image and are used in object detection and localization. Raw image moments are derived directly from the image and are fundamental in deriving moment invariant quantities. The current general algorithm for raw image moments is computationally expensive and the number of multiplications needed scales with the number of pixels in the image. For an image of size (N,M), it has O(NM) multiplications. In this paper we outline an algorithm using the Discrete Radon Transformation for computing the raw image moments of a grayscale image. It reduces two-dimensional moment calculations to linear combinations of one-dimensional moment calculations. We show that the number of multiplications needed scales as O(N + M), making it faster than the most widely used algorithm for raw image moments.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
193,175
2210.01212
spred: Solving $L_1$ Penalty with SGD
We propose to minimize a generic differentiable objective with $L_1$ constraint using a simple reparametrization and straightforward stochastic gradient descent. Our proposal is the direct generalization of previous ideas that the $L_1$ penalty may be equivalent to a differentiable reparametrization with weight decay. We prove that the proposed method, \textit{spred}, is an exact differentiable solver of $L_1$ and that the reparametrization trick is completely ``benign" for a generic nonconvex function. Practically, we demonstrate the usefulness of the method in (1) training sparse neural networks to perform gene selection tasks, which involves finding relevant features in a very high dimensional space, and (2) neural network compression task, to which previous attempts at applying the $L_1$-penalty have been unsuccessful. Conceptually, our result bridges the gap between the sparsity in deep learning and conventional statistical learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
321,164
2307.06175
Learning Decentralized Partially Observable Mean Field Control for Artificial Collective Behavior
Recent reinforcement learning (RL) methods have achieved success in various domains. However, multi-agent RL (MARL) remains a challenge in terms of decentralization, partial observability and scalability to many agents. Meanwhile, collective behavior requires resolution of the aforementioned challenges, and remains of importance to many state-of-the-art applications such as active matter physics, self-organizing systems, opinion dynamics, and biological or robotic swarms. Here, MARL via mean field control (MFC) offers a potential solution to scalability, but fails to consider decentralized and partially observable systems. In this paper, we enable decentralized behavior of agents under partial information by proposing novel models for decentralized partially observable MFC (Dec-POMFC), a broad class of problems with permutation-invariant agents allowing for reduction to tractable single-agent Markov decision processes (MDP) with single-agent RL solution. We provide rigorous theoretical results, including a dynamic programming principle, together with optimality guarantees for Dec-POMFC solutions applied to finite swarms of interest. Algorithmically, we propose Dec-POMFC-based policy gradient methods for MARL via centralized training and decentralized execution, together with policy gradient approximation guarantees. In addition, we improve upon state-of-the-art histogram-based MFC by kernel methods, which is of separate interest also for fully observable MFC. We evaluate numerically on representative collective behavior tasks such as adapted Kuramoto and Vicsek swarming models, being on par with state-of-the-art MARL. Overall, our framework takes a step towards RL-based engineering of artificial collective behavior via MFC.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
378,991
2407.20337
Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities
Discerning between authentic content and that generated by advanced AI methods has become increasingly challenging. While previous research primarily addresses the detection of fake faces, the identification of generated natural images has only recently surfaced. This prompted the recent exploration of solutions that employ foundation vision-and-language models, like CLIP. However, the CLIP embedding space is optimized for global image-to-text alignment and is not inherently designed for deepfake detection, neglecting the potential benefits of tailored training and local image features. In this study, we propose CoDE (Contrastive Deepfake Embeddings), a novel embedding space specifically designed for deepfake detection. CoDE is trained via contrastive learning by additionally enforcing global-local similarities. To sustain the training of our model, we generate a comprehensive dataset that focuses on images generated by diffusion models and encompasses a collection of 9.2 million images produced by using four different generators. Experimental results demonstrate that CoDE achieves state-of-the-art accuracy on the newly collected dataset, while also showing excellent generalization capabilities to unseen image generators. Our source code, trained models, and collected dataset are publicly available at: https://github.com/aimagelab/CoDE.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
true
477,138
1801.07379
Secure Mobile Crowdsensing with Deep Learning
In order to stimulate secure sensing for Internet of Things (IoT) applications such as healthcare and traffic monitoring, mobile crowdsensing (MCS) systems have to address security threats, such as jamming, spoofing and faked sensing attacks, during both the sensing and the information exchange processes in large-scale dynamic and heterogeneous networks. In this article, we investigate secure mobile crowdsensing and present how to use deep learning (DL) methods such as stacked autoencoder (SAE), deep neural network (DNN), and convolutional neural network (CNN) to improve the MCS security approaches including authentication, privacy protection, faked sensing countermeasures, intrusion detection and anti-jamming transmissions in MCS. We discuss the performance gain of these DL-based approaches compared with traditional security schemes and identify the challenges that need to be addressed to implement them in practical MCS systems.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
true
88,775
2408.16774
Optimal UCA Design for OAM Based Wireless Backhaul Transmission
Orbital angular momentum (OAM), which is considered a novel way to achieve high capacity, has attracted much attention recently. OAM signals emitted by a uniform circular array (UCA) are widely regarded to go through Bessel-form channels. However, the channel gains corresponding to the Bessel-form channels have low signal-to-noise-ratio (SNR) on OAM-modes and it is difficult to achieve high capacity using all OAM modes. To achieve the maximum capacity offered by OAM multiplexing for wireless backhaul communications, in this paper we propose an optimal UCA design, which selects the optimal OAM-modes and radius of the receive UCA. We formulate the capacity maximization problem and divide it into two subproblems for obtaining the corresponding optimal UCA design for OAM multiplexing based wireless backhaul communications. In particular, the optimal radius of the receive UCA is first derived. Then, we propose a mode selection scheme to choose appropriate OAM-modes for data transmission to maximize the capacity. Extensive simulations validate that the capacity of OAM multiplexing can be significantly increased with our developed scheme.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
484,450
2204.13474
Port-Hamiltonian Dynamic Mode Decomposition
We present a novel physics-informed system identification method to construct a passive linear time-invariant system. In more detail, for a given quadratic energy functional and measurements of the input, state, and output of a system in the time domain, we find a realization that approximates the data well while guaranteeing that the energy functional satisfies a dissipation inequality. To this end, we use the framework of port-Hamiltonian (pH) systems and modify the dynamic mode decomposition and operator inference, respectively, to be feasible for continuous-time pH systems. We propose an iterative numerical method to solve the corresponding least-squares minimization problem. We construct an effective initialization of the algorithm by studying the least-squares problem in a weighted norm, for which we present the analytical minimum-norm solution. The efficiency of the proposed method is demonstrated with several numerical examples.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
293,829
2410.08081
Packing Analysis: Packing Is More Appropriate for Large Models or Datasets in Supervised Fine-tuning
Packing, initially utilized in the pre-training phase, is an optimization technique designed to maximize hardware resource efficiency by combining different training sequences to fit the model's maximum input length. Although it has demonstrated effectiveness during pre-training, there remains a lack of comprehensive analysis for the supervised fine-tuning (SFT) stage on the following points: (1) whether packing can effectively enhance training efficiency while maintaining performance, (2) the suitable size of the model and dataset for fine-tuning with the packing method, and (3) whether packing unrelated or related training samples might cause the model to either excessively disregard or over-rely on the context. In this paper, we perform extensive comparisons between SFT methods using padding and packing, covering SFT datasets ranging from 69K to 1.2M and models from 8B to 70B. This provides the first comprehensive analysis of the advantages and limitations of packing versus padding, as well as practical considerations for implementing packing in various training scenarios. Our analysis covers various benchmarks, including knowledge, reasoning, and coding, as well as GPT-based evaluations, time efficiency, and other fine-tuning parameters. We also open-source our code for fine-tuning and evaluation and provide checkpoints fine-tuned on datasets of different sizes, aiming to advance future research on packing methods. Code is available at: https://github.com/ShuheWang1998/Packing-Analysis?tab=readme-ov-file.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
496,939
1712.05726
Coverage Analysis and Load Balancing in HetNets with mmWave Multi-RAT Small Cells
We characterize a two tier heterogeneous network, consisting of classical sub-6GHz macro cells, and multi Radio Access Technology (RAT) small cells able to operate in sub-6GHz and millimeter-wave (mm-wave) bands. For optimizing coverage and to balance loads, we propose a two-step mechanism based on two biases for tuning the tier and RAT selection, where the sub-6GHz band is used to speed-up the initial access procedure in the mm-wave RAT. First, we investigate the effect of the biases in terms of signal to interference plus noise ratio (SINR) distribution, cell load, and user throughput. More specifically, we obtain the optimal biases that maximize either the SINR coverage or the user downlink throughput. Then, we characterize the cell load using the mean cell approach and derive upper bounds on the overloading probabilities. Finally, for a given traffic density, we provide the small cell density required to satisfy system constraints in terms of overloading and outage probabilities. Our analysis highlights the importance of deploying dual band small cells in particular when small cells are sparsely deployed or in case of heavy traffic.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
86,764
2403.10318
Anytime Neural Architecture Search on Tabular Data
The increasing demand for tabular data analysis calls for transitioning from manual architecture design to Neural Architecture Search (NAS). This transition demands an efficient and responsive anytime NAS approach that is capable of returning current optimal architectures within any given time budget while progressively enhancing architecture quality with increased budget allocation. However, the area of research on Anytime NAS for tabular data remains unexplored. To this end, we introduce ATLAS, the first anytime NAS approach tailored for tabular data. ATLAS introduces a novel two-phase filtering-and-refinement optimization scheme with joint optimization, combining the strengths of both paradigms of training-free and training-based architecture evaluation. Specifically, in the filtering phase, ATLAS employs a new zero-cost proxy specifically designed for tabular data to efficiently estimate the performance of candidate architectures, thereby obtaining a set of promising architectures. Subsequently, in the refinement phase, ATLAS leverages a fixed-budget search algorithm to schedule the training of the promising candidates, so as to accurately identify the optimal architecture. To jointly optimize the two phases for anytime NAS, we also devise a budget-aware coordinator that delivers high NAS performance within constraints. Experimental evaluations demonstrate that our ATLAS can obtain a good-performing architecture within any predefined time budget and return better architectures as and when a new time budget is made available. Overall, it reduces the search time on tabular data by up to 82.75x compared to existing NAS approaches.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
438,147
1706.03205
Item Silk Road: Recommending Items from Information Domains to Social Users
Online platforms can be divided into information-oriented and social-oriented domains. The former refers to forums or E-commerce sites that emphasize user-item interactions, like Trip.com and Amazon; whereas the latter refers to social networking services (SNSs) that have rich user-user connections, such as Facebook and Twitter. Despite their heterogeneity, these two domains can be bridged by a few overlapping users, dubbed as bridge users. In this work, we address the problem of cross-domain social recommendation, i.e., recommending relevant items of information domains to potential users of social networks. To our knowledge, this is a new problem that has rarely been studied before. Existing cross-domain recommender systems are unsuitable for this task since they have either focused on homogeneous information domains or assumed that users are fully overlapped. Towards this end, we present a novel Neural Social Collaborative Ranking (NSCR) approach, which seamlessly sews up the user-item interactions in information domains and user-user connections in SNSs. In the information domain part, the attributes of users and items are leveraged to strengthen the embedding learning of users and items. In the SNS part, the embeddings of bridge users are propagated to learn the embeddings of other non-bridge users. Extensive experiments on two real-world datasets demonstrate the effectiveness and rationality of our NSCR method.
false
false
false
true
true
true
false
false
false
false
false
false
false
false
false
false
false
false
75,118
cs/0611006
Evolving controllers for simulated car racing
This paper describes the evolution of controllers for racing a simulated radio-controlled car around a track, modelled on a real physical track. Five different controller architectures were compared, based on neural networks, force fields and action sequences. The controllers use either egocentric (first person), Newtonian (third person) or no information about the state of the car (open-loop controller). The only controller that was able to evolve good racing behaviour was based on a neural network acting on egocentric inputs.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
true
false
false
539,843
1704.02362
Fostering User Engagement: Rhetorical Devices for Applause Generation Learnt from TED Talks
One problem that every presenter faces when delivering a public discourse is how to hold the listeners' attention or to keep them involved. Therefore, many studies in conversation analysis work on this issue and qualitatively suggest constructions that can effectively lead to audience's applause. To investigate these proposals quantitatively, in this study we analyze the transcripts of 2,135 TED Talks, with a particular focus on the rhetorical devices that are used by the presenters for applause elicitation. Through conducting regression analysis, we identify and interpret 24 rhetorical devices as triggers of audience applauding. We further build models that can recognize applause-evoking sentences and conclude this work with potential implications.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
71,432
2006.10619
Tensor Decompositions in Recursive Neural Networks for Tree-Structured Data
The paper introduces two new aggregation functions to encode structural knowledge from tree-structured data. They leverage the Canonical and Tensor-Train decompositions to yield expressive context aggregation while limiting the number of model parameters. Finally, we define two novel neural recursive models for trees leveraging such aggregation functions, and we test them on two tree classification tasks, showing the advantage of proposed models when tree outdegree increases.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
182,945
2404.11419
SLAIM: Robust Dense Neural SLAM for Online Tracking and Mapping
We present SLAIM - Simultaneous Localization and Implicit Mapping. We propose a novel coarse-to-fine tracking model tailored for Neural Radiance Field SLAM (NeRF-SLAM) to achieve state-of-the-art tracking performance. Notably, existing NeRF-SLAM systems consistently exhibit inferior tracking performance compared to traditional SLAM algorithms. NeRF-SLAM methods solve camera tracking via image alignment and photometric bundle-adjustment. Such optimization processes are difficult to optimize due to the narrow basin of attraction of the optimization loss in image space (local minima) and the lack of initial correspondences. We mitigate these limitations by implementing a Gaussian pyramid filter on top of NeRF, facilitating a coarse-to-fine tracking optimization strategy. Furthermore, NeRF systems encounter challenges in converging to the right geometry with limited input views. While prior approaches use a Signed-Distance Function (SDF)-based NeRF and directly supervise SDF values by approximating ground truth SDF through depth measurements, this often results in suboptimal geometry. In contrast, our method employs a volume density representation and introduces a novel KL regularizer on the ray termination distribution, constraining scene geometry to consist of empty space and opaque surfaces. Our solution implements both local and global bundle-adjustment to produce a robust (coarse-to-fine) and accurate (KL regularizer) SLAM solution. We conduct experiments on multiple datasets (ScanNet, TUM, Replica) showing state-of-the-art results in tracking and in reconstruction accuracy.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
447,490
2410.22559
Unpicking Data at the Seams: Understanding Disentanglement in VAEs
Disentanglement, or identifying statistically independent factors of the data, is relevant to much of machine learning, from controlled data generation and robust classification to efficient encoding and improving our understanding of the data itself. Disentanglement arises in several generative paradigms including Variational Autoencoders (VAEs), Generative Adversarial Networks and diffusion models. Recent progress has been made in understanding disentanglement in VAEs, where a choice of diagonal posterior covariance matrices is shown to promote mutual orthogonality between columns of the decoder's Jacobian. We build on this to show how such orthogonality, a geometric property, translates to disentanglement, a statistical property, furthering our understanding of how a VAE identifies independent components of, or disentangles, the data.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
503,671
2211.08872
McNet: Fuse Multiple Cues for Multichannel Speech Enhancement
In multichannel speech enhancement, both spectral and spatial information are vital for discriminating between speech and noise. How to fully exploit these two types of information and their temporal dynamics remains an interesting research problem. As a solution to this problem, this paper proposes a multi-cue fusion network named McNet, which cascades four modules to respectively exploit the full-band spatial, narrow-band spatial, sub-band spectral, and full-band spectral information. Experiments show that each module in the proposed network has its unique contribution and, as a whole, notably outperforms other state-of-the-art methods.
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
330,803
2103.02735
Fairness of Exposure in Stochastic Bandits
Contextual bandit algorithms have become widely used for recommendation in online systems (e.g. marketplaces, music streaming, news), where they now wield substantial influence on which items get exposed to the users. This raises questions of fairness to the items -- and to the sellers, artists, and writers that benefit from this exposure. We argue that the conventional bandit formulation can lead to an undesirable and unfair winner-takes-all allocation of exposure. To remedy this problem, we propose a new bandit objective that guarantees merit-based fairness of exposure to the items while optimizing utility to the users. We formulate fairness regret and reward regret in this setting, and present algorithms for both stochastic multi-armed bandits and stochastic linear bandits. We prove that the algorithms achieve sub-linear fairness regret and reward regret. Beyond the theoretical analysis, we also provide empirical evidence that these algorithms can fairly allocate exposure to different arms effectively.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
223,045
2001.00632
Academic Performance Estimation with Attention-based Graph Convolutional Networks
Student's academic performance prediction empowers educational technologies including academic trajectory and degree planning, course recommender systems, early warning and advising systems. Given a student's past data (such as grades in prior courses), the task of student's performance prediction is to predict a student's grades in future courses. Academic programs are structured in a way that prior courses lay the foundation for future courses. The knowledge required by courses is obtained by taking multiple prior courses, which exhibits complex relationships modeled by graph structures. Traditional methods for student's performance prediction usually neglect the underlying relationships between multiple courses; and how students acquire knowledge across them. In addition, traditional methods do not provide interpretation for predictions needed for decision making. In this work, we propose a novel attention-based graph convolutional networks model for student's performance prediction. We conduct extensive experiments on a real-world dataset obtained from a large public university. The experimental results show that our proposed model outperforms state-of-the-art approaches in terms of grade prediction. The proposed model also shows strong accuracy in identifying students who are at-risk of failing or dropping out so that timely intervention and feedback can be provided to the student.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
159,283
1912.12001
Achieving Arbitrary Throughput-Fairness Trade-offs in the Inter Cell Interference Coordination with Fixed Transmit Power Problem
We study the problem of inter cell interference coordination (ICIC) with fixed transmit power in OFDMA-based cellular networks, in which each base station (BS) needs to decide as to which subchannel, if any, to allocate to each of its associated mobile stations (MS) for data transmission. In general, there exists a trade-off between the total throughput (sum of throughputs of all the MSs) and fairness under the allocations found by resource allocation schemes. We introduce the concept of $\tau-\alpha-$fairness by modifying the concept of $\alpha-$fairness, which was earlier proposed in the context of designing fair end-to-end window-based congestion control protocols for packet-switched networks. The concept of $\tau-\alpha-$fairness allows us to achieve arbitrary trade-offs between the total throughput and degree of fairness by selecting an appropriate value of $\alpha$ in $[0,\infty)$. We show that for every $\alpha \in [0,\infty)$ and every $\tau > 0$, the problem of finding a $\tau-\alpha-$fair allocation is NP-Complete. Further, we show that for every $\alpha \in [0, \infty)$, there exist thresholds such that if the potential interference levels experienced by each MS on every subchannel are above the threshold values, then the problem can be optimally solved in polynomial time by reducing it to the bipartite graph matching problem. Also, we propose a simple, distributed subchannel allocation algorithm for the ICIC problem, which is flexible, requires a small amount of time to operate, and requires information exchange among only neighboring BSs. We investigate via simulations as to how the algorithm parameters should be selected so as to achieve any desired trade-off between the total throughput and fairness.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
158,727
2012.09897
Treatment Targeting by AUUC Maximization with Generalization Guarantees
We consider the task of optimizing treatment assignment based on individual treatment effect prediction. This task is found in many applications such as personalized medicine or targeted advertising and has gained a surge of interest in recent years under the name of Uplift Modeling. It consists in targeting treatment to the individuals for whom it would be the most beneficial. In real life scenarios, when we do not have access to ground-truth individual treatment effect, the capacity of models to do so is generally measured by the Area Under the Uplift Curve (AUUC), a metric that differs from the learning objectives of most of the Individual Treatment Effect (ITE) models. We argue that the learning of these models could inadvertently degrade AUUC and lead to suboptimal treatment assignment. To tackle this issue, we propose a generalization bound on the AUUC and present a novel learning algorithm that optimizes a derivable surrogate of this bound, called AUUC-max. Finally, we empirically demonstrate the tightness of this generalization bound, its effectiveness for hyper-parameter tuning and show the efficiency of the proposed algorithm compared to a wide range of competitive baselines on two classical benchmarks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
212,197
2110.09516
Keep it Tighter -- A Story on Analytical Mean Embeddings
Kernel techniques are among the most popular and flexible approaches in data science, allowing one to represent probability measures without loss of information under mild conditions. The resulting mapping, called the mean embedding, gives rise to a divergence measure referred to as maximum mean discrepancy (MMD) with existing quadratic-time estimators (w.r.t. the sample size) and known convergence properties for bounded kernels. In this paper we focus on the problem of MMD estimation when the mean embedding of one of the underlying distributions is available analytically. Particularly, we consider distributions on the real line (motivated by financial applications) and prove tighter concentration for the proposed estimator under this semi-explicit setting; we also extend the result to the case of unbounded (exponential) kernel with minimax-optimal lower bounds. We demonstrate the efficiency of our approach beyond a synthetic example in three real-world examples relying on one-dimensional random variables: index replication and calibration on loss-given-default ratios and on S&P 500 data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
261,840
1205.4933
Compressed Sensing on the Image of Bilinear Maps
For several communication models, the dispersive part of a communication channel is described by a bilinear operation $T$ between the possible sets of input signals and channel parameters. The received channel output has then to be identified from the image $T(X,Y)$ of the input signal difference sets $X$ and the channel state sets $Y$. The main goal in this contribution is to characterize the compressibility of $T(X,Y)$ with respect to an ambient dimension $N$. In this paper we show that a restricted norm multiplicativity of $T$ on all canonical subspaces $X$ and $Y$ with dimension $S$ resp. $F$ is sufficient for the reconstruction of output signals with an overwhelming probability from $\mathcal{O}((S+F)\log N)$ random sub-Gaussian measurements.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
16,130
2308.11079
Long-Term Prediction of Natural Video Sequences with Robust Video Predictors
Predicting high dimensional video sequences is a curiously difficult problem. The number of possible futures for a given video sequence grows exponentially over time due to uncertainty. This is especially evident when trying to predict complicated natural video scenes from a limited snapshot of the world. The inherent uncertainty accumulates the further into the future you predict, making long-term prediction very difficult. In this work we introduce a number of improvements to existing work that aid in creating Robust Video Predictors (RoViPs). We show that with a combination of deep Perceptual and uncertainty-based reconstruction losses we are able to create high quality short-term predictions. Attention-based skip connections are utilised to allow for long range spatial movement of input features to further improve performance. Finally, we show that by simply making the predictor robust to its own prediction errors, it is possible to produce very long, realistic natural video sequences using an iterated single-step prediction task.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
386,994
2410.08032
Strategic Classification With Externalities
We propose a new variant of the strategic classification problem: a principal reveals a classifier, and $n$ agents report their (possibly manipulated) features to be classified. Motivated by real-world applications, our model crucially allows the manipulation of one agent to affect another; that is, it explicitly captures inter-agent externalities. The principal-agent interactions are formally modeled as a Stackelberg game, with the resulting agent manipulation dynamics captured as a simultaneous game. We show that under certain assumptions, the pure Nash Equilibrium of this agent manipulation game is unique and can be efficiently computed. Leveraging this result, PAC learning guarantees are established for the learner: informally, we show that it is possible to learn classifiers that minimize loss on the distribution, even when a random number of agents are manipulating their way to a pure Nash Equilibrium. We also comment on the optimization of such classifiers through gradient-based approaches. This work sets the theoretical foundations for a more realistic analysis of classifiers that are robust against multiple strategic actors interacting in a common environment.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
true
false
false
true
496,916
1402.3364
Metric tree-like structures in real-life networks: an empirical study
Based on solid theoretical foundations, we present strong evidence that a number of real-life networks, taken from different domains like Internet measurements, biological data, web graphs, social and collaboration networks, exhibit tree-like structures from a metric point of view. We investigate a few graph parameters, namely, the tree-distortion and the tree-stretch, the tree-length and the tree-breadth, the Gromov's hyperbolicity, the cluster-diameter and the cluster-radius in a layering partition of a graph, which capture and quantify this phenomenon of being metrically close to a tree. By bringing all those parameters together, we not only provide efficient means for detecting such metric tree-like structures in large-scale networks but also show how such structures can be used, for example, to efficiently and compactly encode approximate distance and almost shortest path information and to quickly and accurately estimate diameters and radii of those networks. Estimating the diameter and the radius of a graph or distances between its arbitrary vertices are fundamental primitives in many data and graph mining algorithms.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
30,867
2011.03114
Uncertainty-Aware Vehicle Orientation Estimation for Joint Detection-Prediction Models
Object detection is a critical component of a self-driving system, tasked with inferring the current states of the surrounding traffic actors. While there exist a number of studies on the problem of inferring the position and shape of vehicle actors, understanding actors' orientation remains a challenge for existing state-of-the-art detectors. Orientation is an important property for downstream modules of an autonomous system, particularly relevant for motion prediction of stationary or reversing actors where current approaches struggle. We focus on this task and present a method that extends the existing models that perform joint object detection and motion prediction, allowing us to more accurately infer vehicle orientations. In addition, the approach is able to quantify prediction uncertainty, outputting the probability that the inferred orientation is flipped, which allows for improved motion prediction and safer autonomous operations. Empirical results show the benefits of the approach, obtaining state-of-the-art performance on the open-sourced nuScenes data set.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
205,136
2407.11865
Novel Hybrid Integrated Pix2Pix and WGAN Model with Gradient Penalty for Binary Images Denoising
This paper introduces a novel approach to image denoising that leverages the advantages of Generative Adversarial Networks (GANs). Specifically, we propose a model that combines elements of the Pix2Pix model and the Wasserstein GAN (WGAN) with Gradient Penalty (WGAN-GP). This hybrid framework seeks to capitalize on the denoising capabilities of conditional GANs, as demonstrated in the Pix2Pix model, while mitigating the need for an exhaustive search for optimal hyperparameters that could potentially ruin the stability of the learning process. In the proposed method, the GAN's generator is employed to produce denoised images, harnessing the power of a conditional GAN for noise reduction. Simultaneously, the implementation of the Lipschitz continuity constraint during updates, as featured in WGAN-GP, aids in reducing susceptibility to mode collapse. This innovative design allows the proposed model to benefit from the strong points of both Pix2Pix and WGAN-GP, generating superior denoising results while ensuring training stability. Drawing on previous work on image-to-image translation and GAN stabilization techniques, the proposed research highlights the potential of GANs as a general-purpose solution for denoising. The paper details the development and testing of this model, showcasing its effectiveness through numerical experiments. The dataset was created by adding synthetic noise to clean images. Numerical results based on real-world dataset validation underscore the efficacy of this approach in image-denoising tasks, exhibiting significant enhancements over traditional techniques. Notably, the proposed model demonstrates strong generalization capabilities, performing effectively even when trained with synthetic noise.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
473,653
2410.18952
Dynamic Vocabulary Pruning in Early-Exit LLMs
Increasing the size of large language models (LLMs) has been shown to lead to better performance. However, this comes at the cost of slower and more expensive inference. Early-exiting is a promising approach for improving the efficiency of LLM inference by enabling next token prediction at intermediate layers. Yet, the large vocabulary size in modern LLMs makes the confidence estimation required for exit decisions computationally expensive, diminishing the efficiency gains. To address this, we propose dynamically pruning the vocabulary at test time for each token. Specifically, the vocabulary is pruned at one of the initial layers, and the smaller vocabulary is then used throughout the rest of the forward pass. Our experiments demonstrate that such post-hoc dynamic vocabulary pruning improves the efficiency of confidence estimation in early-exit LLMs while maintaining competitive performance.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
502,098
1912.12836
Supermodeling of tumor dynamics with parallel isogeometric analysis solver
Supermodeling is a modern, model-ensembling paradigm that integrates several self-synchronized imperfect sub-models by controlling a few meta-parameters to generate more accurate predictions of complex systems' dynamics. Continual synchronization between sub-models allows for trajectory predictions with superior accuracy compared to a single model or a classical ensemble of independent models whose decision fusion is based on the majority voting or averaging the outcomes. However, numerous observations indicate that the supermodeling procedure's convergence depends on a few principal factors such as (1) the number of sub-models, (2) their proper selection, and (3) the choice of the convergent optimization procedure, which assimilates the supermodel meta-parameters to data. Herein, we focus on modeling the evolution of the system described by a set of PDEs. We prove that supermodeling is conditionally convergent to a fixed-point attractor regarding only the supermodel meta-parameters. We investigate the formal conditions of the convergence of the supermodeling scheme theoretically. We employ the Banach fixed point theorem for the supermodeling correction operator, updating the synchronization constants' values iteratively. The "nudging" of the supermodel to the ground truth should be well balanced because both too small and too large attraction to data cause the supermodel desynchronization. The time-step size can control the convergence of the training procedure, by balancing the Lipschitz continuity constant of the PDE operator. All the sub-models have to be close to the ground-truth along the training trajectory but still sufficiently diverse to explore the phase space better. As an example, we discuss the three-dimensional supermodel of tumor evolution to demonstrate the supermodel's perfect fit to artificial data generated based on real medical images.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
158,946
1711.04992
Feature importance scores and lossless feature pruning using Banzhaf power indices
Understanding the influence of features in machine learning is crucial to interpreting models and selecting the best features for classification. In this work we propose the use of principles from coalitional game theory to reason about importance of features. In particular, we propose the use of the Banzhaf power index as a measure of influence of features on the outcome of a classifier. We show that features having Banzhaf power index of zero can be losslessly pruned without damage to classifier accuracy. Computing the power indices does not require having access to data samples. However, if samples are available, the indices can be empirically estimated. We compute Banzhaf power indices for a neural network classifier on real-life data, and compare the results with gradient-based feature saliency, and coefficients of a logistic regression model with $L_1$ regularization.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
84,475
2407.03190
Cutting through the noise to motivate people: A comprehensive analysis of COVID-19 social media posts de/motivating vaccination
The COVID-19 pandemic exposed significant weaknesses in the healthcare information system. The overwhelming volume of misinformation on social media and other socioeconomic factors created extraordinary challenges to motivate people to take proper precautions and get vaccinated. In this context, our work explored a novel direction by analyzing an extensive dataset collected over two years, identifying the topics de/motivating the public about COVID-19 vaccination. We analyzed these topics based on time, geographic location, and political orientation. We noticed that while the motivating topics remain the same over time and geographic location, the demotivating topics change rapidly. We also identified that intrinsic motivation, rather than external mandate, is more advantageous to inspire the public. This study addresses scientific communication and public motivation in social media. It can help public health officials, policymakers, and social media platforms develop more effective messaging strategies to cut through the noise of misinformation and educate the public about scientific findings.
false
false
false
true
false
false
true
false
true
false
false
false
false
true
false
false
false
false
470,064
2002.03246
SPA: Verbal Interactions between Agents and Avatars in Shared Virtual Environments using Propositional Planning
We present a novel approach for generating plausible verbal interactions between virtual human-like agents and user avatars in shared virtual environments. Sense-Plan-Ask, or SPA, extends prior work in propositional planning and natural language processing to enable agents to plan with uncertain information, and leverage question and answer dialogue with other agents and avatars to obtain the needed information and complete their goals. The agents are additionally able to respond to questions from the avatars and other agents using natural language, enabling real-time multi-agent multi-avatar communication environments. Our algorithm can simulate tens of virtual agents at interactive rates interacting, moving, communicating, planning, and replanning. We find that our algorithm incurs a small runtime cost and enables agents to complete their goals more effectively than agents without the ability to leverage natural-language communication. We demonstrate quantitative results on a set of simulated benchmarks and detail the results of a preliminary user-study conducted to evaluate the plausibility of the virtual interactions generated by SPA. Overall, we find that participants prefer SPA to prior techniques in 84\% of responses including significant benefits in terms of the plausibility of natural-language interactions and the positive impact of those interactions.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
163,197
2007.10938
Reference study of CityGML software support: the GeoBIM benchmark 2019 -- Part II
OGC CityGML is an open standard for 3D city models intended to foster interoperability and support various applications. However, through our practical experience and discussions with practitioners, we have noticed several problems related to the implementation of the standard and the use of standardized data. Nevertheless, a systematic investigation of these issues has never been performed, and there is thus insufficient evidence that can be used for tackling the problems. The GeoBIM benchmark project is aimed at finding such evidence by involving external volunteers, reporting on tools behaviour about relevant aspects (geometry, semantics, georeferencing, functionalities), analysed and described in this paper. This study explicitly pointed out the critical points embedded in the format as an evidence base for future development. This paper is in tandem with Part I, describing the results of the benchmark related to IFC, counterpart of CityGML within building information modelling.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
188,422
1211.6821
Additive-State-Decomposition Dynamic Inversion Stabilized Control for a Class of Uncertain MIMO Systems
This paper presents a new control, namely additive-state-decomposition dynamic inversion stabilized control, that is used to stabilize a class of multi-input multi-output (MIMO) systems subject to nonparametric time-varying uncertainties with respect to both state and input. By additive state decomposition and a new definition of output, the considered uncertain system is transformed into a minimum-phase uncertainty-free system with relative degree one, in which all uncertainties are lumped into a new disturbance at the output. Subsequently, dynamic inversion control is applied to reject the lumped disturbance. Performance analysis of the resulting closed-loop dynamics shows that the stability can be ensured. Finally, to demonstrate its effectiveness, the proposed control is applied to two existing problems by numerical simulation. Furthermore, in order to show its practicability, the proposed control is also performed on a real quadrotor to stabilize its attitude when its inertia moment matrix is subject to a large uncertainty.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
20,008
1811.03773
A Fully Automated System for Sizing Nasal PAP Masks Using Facial Photographs
We present a fully automated system for sizing nasal Positive Airway Pressure (PAP) masks. The system is comprised of a mix of HOG object detectors as well as multiple convolutional neural network stages for facial landmark detection. The models were trained using samples from the publicly available PUT and MUCT datasets while transfer learning was also employed to improve the performance of the models on facial photographs of actual PAP mask users. The fully automated system demonstrated an overall accuracy of 64.71% in correctly selecting the appropriate mask size and 86.1% accuracy sizing within 1 mask size.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
112,927
2305.18495
Hardware-aware Training Techniques for Improving Robustness of Ex-Situ Neural Network Transfer onto Passive TiO2 ReRAM Crossbars
Passive resistive random access memory (ReRAM) crossbar arrays, a promising emerging technology used for analog matrix-vector multiplications, are far superior to their active (1T1R) counterparts in terms of the integration density. However, current transfers of neural network weights into the conductance state of the memory devices in the crossbar architecture are accompanied by significant losses in precision due to hardware variabilities such as sneak path currents, biasing scheme effects and conductance tuning imprecision. In this work, training approaches that adapt techniques such as dropout, the reparametrization trick and regularization to TiO2 crossbar variabilities are proposed in order to generate models that are better adapted to their hardware transfers. The viability of this approach is demonstrated by comparing the outputs and precision of the proposed hardware-aware network with those of a regular fully connected network over a few thousand weight transfers using the half moons dataset in a simulation based on experimental data. For the neural network trained using the proposed hardware-aware method, 79.5% of the test set's data points can be classified with an accuracy of 95% or higher, while only 18.5% of the test set's data points can be classified with this accuracy by the regularly trained neural network.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
369,109
2312.09912
Reliable Probabilistic Classification with Neural Networks
Venn Prediction (VP) is a new machine learning framework for producing well-calibrated probabilistic predictions. In particular it provides well-calibrated lower and upper bounds for the conditional probability of an example belonging to each possible class of the problem at hand. This paper proposes five VP methods based on Neural Networks (NNs), which is one of the most widely used machine learning techniques. The proposed methods are evaluated experimentally on four benchmark datasets and the obtained results demonstrate the empirical well-calibratedness of their outputs and their superiority over the outputs of the traditional NN classifier.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
415,929
2407.19790
Hashing based Contrastive Learning for Virtual Screening
Virtual screening (VS) is a critical step in computer-aided drug discovery, aiming to identify molecules that bind to a specific target receptor like protein. Traditional VS methods, such as docking, are often too time-consuming for screening large-scale molecular databases. Recent advances in deep learning have demonstrated that learning vector representations for both proteins and molecules using contrastive learning can outperform traditional docking methods. However, given that target databases often contain billions of molecules, real-valued vector representations adopted by existing methods can still incur significant memory and time costs in VS. To address this problem, in this paper we propose a hashing-based contrastive learning method, called DrugHash, for VS. DrugHash treats VS as a retrieval task that uses efficient binary hash codes for retrieval. In particular, DrugHash designs a simple yet effective hashing strategy to enable end-to-end learning of binary hash codes for both protein and molecule modalities, which can dramatically reduce the memory and time costs with higher accuracy compared with existing methods. Experimental results show that DrugHash can outperform existing methods to achieve state-of-the-art accuracy, with a memory saving of 32$\times$ and a speed improvement of 3.5$\times$.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
476,927
2502.00078
Deep Ensembling with Multimodal Image Fusion for Efficient Classification of Lung Cancer
This study focuses on the classification of cancerous and healthy slices from multimodal lung images. The data used in the research comprises Computed Tomography (CT) and Positron Emission Tomography (PET) images. The proposed strategy achieves the fusion of PET and CT images by utilizing Principal Component Analysis (PCA) and an Autoencoder. Subsequently, a new ensemble-based classifier, Deep Ensembled Multimodal Fusion (DEMF), is developed, employing majority voting to classify the sample images under examination. Gradient-weighted Class Activation Mapping (Grad-CAM) is employed to visualize the classification accuracy of cancer-affected images. Given the limited sample size, a random image augmentation strategy is employed during the training phase. The DEMF network helps mitigate the challenges of scarce data in computer-aided medical image analysis. The proposed network is compared with state-of-the-art networks across three publicly available datasets. The network outperforms others based on the metrics - Accuracy, F1-Score, Precision, and Recall. The investigation results highlight the effectiveness of the proposed network.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
529,209
2310.16121
19 Parameters Is All You Need: Tiny Neural Networks for Particle Physics
As particle accelerators increase their collision rates, and deep learning solutions prove their viability, there is a growing need for lightweight and fast neural network architectures for low-latency tasks such as triggering. We examine the potential of one recent Lorentz- and permutation-symmetric architecture, PELICAN, and present its instances with as few as 19 trainable parameters that outperform generic architectures with tens of thousands of parameters when compared on the binary classification task of top quark jet tagging.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
402,584
1910.00352
Proximal Policy Optimization for Improved Convergence in IRGAN
IRGAN is an information retrieval (IR) modeling approach that uses a theoretical minimax game between a generative and a discriminative model to iteratively optimize both of them, hence unifying the generative and discriminative approaches. Despite significant performance improvements in several information retrieval tasks, IRGAN training is an unstable process, and the solution varies largely with the random parameter initialization. In this work, we present an improved training objective based on proximal policy optimization objective and Gumbel-Softmax based sampling for the generator. We also propose a modified training algorithm which takes a single gradient update on both the generator as well as discriminator for each iteration step. We present empirical evidence of the improved convergence of the proposed model over the original IRGAN and a comparison on three different IR tasks on benchmark datasets is also discussed, emphasizing the proposed model's superior performance.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
147,642
2303.14007
'Team-in-the-loop': Ostrom's IAD framework 'rules in use' to map and measure contextual impacts of AI
This article explores how the 'rules in use' from Ostrom's Institutional Analysis and Development Framework (IAD) can be developed as a context analysis approach for AI. AI risk assessment frameworks increasingly highlight the need to understand existing contexts. However, these approaches do not frequently connect with established institutional analysis scholarship. We outline a novel direction illustrated through a high-level example to understand how clinical oversight is potentially impacted by AI. Much current thinking regarding oversight for AI revolves around the idea of decision makers being in-the-loop and, thus, having capacity to intervene to prevent harm. However, our analysis finds that oversight is complex, frequently made by teams of professionals and relies upon explanation to elicit information. Professional bodies and liability also function as institutions of polycentric oversight. These are all impacted by the challenge of oversight of AI systems. The approach outlined has potential utility as a policy tool of context analysis aligned with the 'Govern and Map' functions of the National Institute of Standards and Technology (NIST) AI Risk Management Framework; however, further empirical research is needed. Our analysis illustrates the benefit of existing institutional analysis approaches in foregrounding team structures within oversight and, thus, in conceptions of 'human in the loop'.
true
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
353,917
1805.03162
Polite Dialogue Generation Without Parallel Data
Stylistic dialogue response generation, with valuable applications in personality-based conversational agents, is a challenging task because the response needs to be fluent, contextually-relevant, as well as paralinguistically accurate. Moreover, parallel datasets for regular-to-stylistic pairs are usually unavailable. We present three weakly-supervised models that can generate diverse polite (or rude) dialogue responses without parallel data. Our late fusion model (Fusion) merges the decoder of an encoder-attention-decoder dialogue model with a language model trained on stand-alone polite utterances. Our label-fine-tuning (LFT) model prepends to each source sequence a politeness-score scaled label (predicted by our state-of-the-art politeness classifier) during training, and at test time is able to generate polite, neutral, and rude responses by simply scaling the label embedding by the corresponding score. Our reinforcement learning model (Polite-RL) encourages politeness generation by assigning rewards proportional to the politeness classifier score of the sampled response. We also present two retrieval-based polite dialogue model baselines. Human evaluation validates that while the Fusion and the retrieval-based models achieve politeness with poorer context-relevance, the LFT and Polite-RL models can produce significantly more polite responses without sacrificing dialogue quality.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
96,997
2304.04108
High Speed Neuromorphic Vision-Based Inspection of Countersinks in Automated Manufacturing Processes
Countersink inspection is crucial in various automated assembly lines, especially in the aerospace and automotive sectors. Advancements in machine vision introduced automated robotic inspection of countersinks using laser scanners and monocular cameras. Nevertheless, the aforementioned sensing pipelines require the robot to pause on each hole for inspection due to high latency and measurement uncertainties with motion, leading to prolonged execution times of the inspection task. The neuromorphic vision sensor, on the other hand, has the potential to expedite the countersink inspection process, but the unorthodox output of the neuromorphic technology prohibits utilizing traditional image processing techniques. Therefore, novel event-based perception algorithms need to be introduced. We propose a countersink detection approach on the basis of event-based motion compensation and the mean-shift clustering principle. In addition, our framework presents a robust event-based circle detection algorithm to precisely estimate the depth of the countersink specimens. The proposed approach expedites the inspection process by a factor of 10$\times$ compared to conventional countersink inspection methods. The work in this paper was validated for over 50 trials on three countersink workpiece variants. The experimental results show that our method provides a precision of 0.025 mm for countersink depth inspection despite the low resolution of commercially available neuromorphic cameras.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
357,078
1012.1552
Bridging the Gap between Reinforcement Learning and Knowledge Representation: A Logical Off- and On-Policy Framework
Knowledge Representation is an important issue in reinforcement learning. In this paper, we bridge the gap between reinforcement learning and knowledge representation, by providing a rich knowledge representation framework, based on normal logic programs with answer set semantics, that is capable of solving model-free reinforcement learning problems for more complex domains and exploits the domain-specific knowledge. We prove the correctness of our approach. We show that the complexity of finding an offline and online policy for a model-free reinforcement learning problem in our approach is NP-complete. Moreover, we show that any model-free reinforcement learning problem in an MDP environment can be encoded as a SAT problem. The importance of that is model-free reinforcement
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
8,445
cmp-lg/9608016
A Sign-Based Phrase Structure Grammar for Turkish
This study analyses Turkish syntax from an informational point of view. Sign-based linguistic representation and principles of HPSG (Head-driven Phrase Structure Grammar) theory are adapted to Turkish. The basic informational elements are nested and inherently sorted feature structures called signs. In the implementation, the logic programming tool ALE (Attribute Logic Engine), which is primarily designed for implementing HPSG grammars, is used. A type and structure hierarchy of the Turkish language is designed. Syntactic phenomena such as subcategorization, relative clauses, constituent order variation, adjuncts, nominal predicates and complement-modifier relations in Turkish are analyzed. A parser is designed and implemented in ALE.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
536,651
2011.00481
Fault Tolerant Control of Multirotor UAV for Piloted Outdoor Flights
This paper aims to develop a Fault Tolerant Control (FTC) architecture for the case of a damaged actuator for a multirotor UAV that can be applied across multirotor platforms based on their Attainable Virtual Control Set (AVCS). The research aims to study the AVCS and identify the parameters that limit the controllability of a multirotor UAV post an actuator failure. Based on the study of controllability, the requirements for a FTC are laid out. The implemented control solution will be tested on a quadrotor, Intel Shooting Star UAV platform in indoor and outdoor flights using only the onboard sensors. The attitude control solution is implemented with reduced attitude control, and the control allocation is performed with pseudo-inverse based model inversion with sequential desaturation to ensure tilt priority. The model is identified with an offline Ordinary Least Squares routine and subsequently updated with the Recursive Least Squares method. An offline calibration routine is implemented to correct IMU offset distance from the centre of rotation to correct for accelerometer bias caused by the high-speed spin after failure in a quadrotor.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
204,238
2410.23620
Identifiability Guarantees for Causal Disentanglement from Purely Observational Data
Causal disentanglement aims to learn about latent causal factors behind data, holding the promise to augment existing representation learning methods in terms of interpretability and extrapolation. Recent advances establish identifiability results assuming that interventions on (single) latent factors are available; however, it remains debatable whether such assumptions are reasonable due to the inherent nature of intervening on latent variables. Accordingly, we reconsider the fundamentals and ask what can be learned using just observational data. We provide a precise characterization of latent factors that can be identified in nonlinear causal models with additive Gaussian noise and linear mixing, without any interventions or graphical restrictions. In particular, we show that the causal variables can be identified up to a layer-wise transformation and that further disentanglement is not possible. We transform these theoretical results into a practical algorithm consisting of solving a quadratic program over the score estimation of the observed data. We provide simulation results to support our theoretical guarantees and demonstrate that our algorithm can derive meaningful causal representations from purely observational data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
504,107
2303.15570
Online Non-Destructive Moisture Content Estimation of Filter Media During Drying Using Artificial Neural Networks
Moisture content (MC) estimation is important in the manufacturing process of drying bulky filter media products as it is the prerequisite for drying optimization. In this study, a dataset collected by performing 161 industrial drying experiments is described and a methodology for MC estimation in a non-destructive and online manner during industrial drying is presented. An artificial neural network (ANN) based method is compared to state-of-the-art MC estimation methods reported in the literature. Results of model fitting and training show that a three-layer Perceptron achieves the lowest error. Experimental results show that ANNs combined with oven settings data, drying time and product temperature can be used to reliably estimate the MC of bulky filter media products.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
354,538
2308.10209
Fairness-aware Competitive Bidding Influence Maximization in Social Networks
Competitive Influence Maximization (CIM) has been studied for years due to its wide application in many domains. Most current studies primarily focus on the micro-level optimization by designing policies for one competitor to defeat its opponents. Furthermore, current studies ignore the fact that many influential nodes have their own starting prices, which may lead to inefficient budget allocation. In this paper, we propose a novel Competitive Bidding Influence Maximization (CBIM) problem, where the competitors allocate budgets to bid for the seeds attributed to the platform during multiple bidding rounds. To solve the CBIM problem, we propose a Fairness-aware Multi-agent Competitive Bidding Influence Maximization (FMCBIM) framework. In this framework, we present a Multi-agent Bidding Particle Environment (MBE) to model the competitors' interactions, and design a starting price adjustment mechanism to model the dynamic bidding environment. Moreover, we put forward a novel Multi-agent Competitive Bidding Influence Maximization (MCBIM) algorithm to optimize competitors' bidding policies. Extensive experiments on five datasets show that our work has good efficiency and effectiveness.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
386,628
2301.10416
AI vs. Human -- Differentiation Analysis of Scientific Content Generation
Recent neural language models have taken a significant step forward in producing remarkably controllable, fluent, and grammatical text. Although studies have found that AI-generated text is not distinguishable from human-written text by crowd-sourcing workers, there still exist errors in AI-generated text which are even subtler and harder to spot. We primarily focus on the scenario in which a scientific AI writing assistant is deeply involved. First, we construct a feature description framework to distinguish between AI-generated text and human-written text in terms of syntax, semantics, and pragmatics based on human evaluation. Then we utilize the features from the proposed framework, i.e., writing style, coherence, consistency, and argument logic, to analyze the two types of content. Finally, we adopt several publicly available AI-generated scientific text detection models to investigate the gap between AI-generated scientific text and human-written scientific text. The results suggest that while AI has the potential to generate scientific content that is as accurate as human-written content, there is still a gap in terms of depth and overall quality. AI-generated scientific content is more likely to contain errors in factual issues. We find that there exists a "writing style" gap between AI-generated scientific text and human-written scientific text. Based on the analysis results, we summarize a series of model-agnostic and distribution-agnostic features for detection tasks in other domains. Findings in this paper contribute to guiding the optimization of AI models to produce high-quality content and to addressing related ethical and security concerns.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
341,797
2310.15468
Empowering Distributed Solutions in Renewable Energy Systems and Grid Optimization
This study delves into the shift from centralized to decentralized approaches in the electricity industry, with a particular focus on how machine learning (ML) advancements play a crucial role in empowering renewable energy sources and improving grid management. ML models have become increasingly important in predicting renewable energy generation and consumption, utilizing various techniques like artificial neural networks, support vector machines, and decision trees. Furthermore, data preprocessing methods, such as data splitting, normalization, decomposition, and discretization, are employed to enhance prediction accuracy. The incorporation of big data and ML into smart grids offers several advantages, including heightened energy efficiency, more effective responses to demand, and better integration of renewable energy sources. Nevertheless, challenges like handling large data volumes, ensuring cybersecurity, and obtaining specialized expertise must be addressed. The research investigates various ML applications within the realms of solar energy, wind energy, and electric distribution and storage, illustrating their potential to optimize energy systems. To sum up, this research demonstrates the evolving landscape of the electricity sector as it shifts from centralized to decentralized solutions through the application of ML innovations and distributed decision-making, ultimately shaping a more efficient and sustainable energy future.
false
false
false
false
true
false
true
false
false
false
true
false
false
false
false
false
false
false
402,321