Dataset schema (one record per paper; column name, type, value range):

  id                  string, length 9 to 16
  title               string, length 4 to 278
  abstract            string, length 3 to 4.08k
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL,
  cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                      bool, 2 classes each (one multi-label category column per label)
  __index_level_0__   int64, 0 to 541k

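Given this schema, each record can be handled as a plain mapping from column names to values. The following is a minimal sketch (plain Python, no external libraries; the dict-per-row representation is an assumption, not part of the dataset itself) showing how the 18 boolean columns map back to a list of active category labels. The example values are copied from the first record listed below.

```python
# Column order of the boolean label columns, as given in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def active_labels(row: dict) -> list[str]:
    """Return the category columns that are True for this row."""
    return [c for c in LABEL_COLUMNS if row.get(c)]

# One record, represented as a dict (values from the first row of the dump;
# the abstract is elided here).
row = {
    "id": "2502.03843",
    "title": "Improving Natural Language Understanding for LLMs "
             "via Large-Scale Instruction Synthesis",
    **{c: False for c in LABEL_COLUMNS},  # default: all labels off
    "cs.AI": True,
    "cs.CL": True,
    "__index_level_0__": 530886,
}

print(active_labels(row))  # ['cs.AI', 'cs.CL']
```

The same helper works for any row because the label set is fixed by the schema; a multi-hot row reduces to the list of columns that are True.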
2502.03843
Improving Natural Language Understanding for LLMs via Large-Scale Instruction Synthesis
High-quality, large-scale instructions are crucial for aligning large language models (LLMs); however, there is a severe shortage of instructions in the field of natural language understanding (NLU). Previous works on constructing NLU instructions mainly focus on information extraction (IE), neglecting tasks such as machine reading comprehension, question answering, and text classification. Furthermore, the lack of diversity in the data has led to a decreased generalization ability of trained LLMs in other NLU tasks and a noticeable decline in the fundamental model's general capabilities. To address this issue, we propose Hum, a large-scale, high-quality synthetic instruction corpus for NLU tasks, designed to enhance the NLU capabilities of LLMs. Specifically, Hum includes IE (either closed IE or open IE), machine reading comprehension, text classification, and instruction generalist tasks, thereby enriching task diversity. Additionally, we introduce a human-LLMs collaborative mechanism to synthesize instructions, which enriches instruction diversity by incorporating guidelines, preference rules, and format variants. We conduct extensive experiments on 5 NLU tasks and 28 general capability evaluation datasets for LLMs. Experimental results show that Hum enhances the NLU capabilities of six LLMs by an average of 3.1%, with no significant decline observed in other general capabilities.
Labels: cs.AI, cs.CL | __index_level_0__: 530,886

1905.05667
Evaluation Metrics for Unsupervised Learning Algorithms
Determining the quality of the results obtained by clustering techniques is a key issue in unsupervised machine learning. Many authors have discussed the desirable features of good clustering algorithms. However, Jon Kleinberg established an impossibility theorem for clustering. As a consequence, a wealth of studies have proposed techniques to evaluate the quality of clustering results depending on the characteristics of the clustering problem and the algorithmic technique employed to cluster data.
Labels: cs.LG | __index_level_0__: 130,784

2405.06710
Mobile Sequencers
The article is an attempt to contribute to explorations of a common origin for language and planned-collaborative action. It gives `semantics of change' the central stage in the synthesis, from its history and recordkeeping to its development, its syntax, delivery and reception, including substratal aspects. It is suggested that to arrive at a common core, linguistic semantics must be understood as studying through syntax mobile agent's representing, tracking and coping with change and no change. Semantics of actions can be conceived the same way, but through plans instead of syntax. The key point is the following: Sequencing itself, of words and action sequences, brings in more structural interpretation to the sequence than which is immediately evident from the sequents themselves. Mobile sequencers can be understood as subjects structuring reporting, understanding and keeping track of change and no change. The idea invites rethinking of the notion of category, both in language and in planning. Understanding understanding change by mobile agents is suggested to be about human extended practice, not extended-human practice. That's why linguistics is as important as computer science in the synthesis. It must rely on representational history of acts, thoughts and expressions, personal and public, crosscutting overtness and covertness of these phenomena. It has implication for anthropology in the extended practice, which is covered briefly.
Labels: cs.AI, cs.CL, cs.CY | __index_level_0__: 453,421

2409.16750
Distributed Robust Optimization Method for AC/MTDC Hybrid Power Systems with DC Network Cognizance
AC/multi-terminal DC (MTDC) hybrid power systems have emerged as a solution for the large-scale and long-distance accommodation of power produced by renewable energy systems (RESs). To ensure the optimal operation of such hybrid power systems, this paper addresses three key issues: system operational flexibility, centralized communication limitations, and RES uncertainties. Accordingly, a specific AC/DC optimal power flow (OPF) model and a distributed robust optimization method are proposed. Firstly, we apply a set of linear approximation and convex relaxation techniques to formulate the mixed-integer convex AC/DC OPF model. This model incorporates the DC network-cognizant constraint and enables DC topology reconfiguration. Next, generalized Benders decomposition (GBD) is employed to provide distributed optimization. Enhanced approaches are incorporated into GBD to achieve parallel computation and asynchronous updating. Additionally, the extreme scenario method (ESM) is embedded into the AC/DC OPF model to provide robust decisions to hedge against RES uncertainties. ESM is further extended to align with the GBD procedure. Numerical results are finally presented to validate the effectiveness of our proposed method.
Labels: cs.SY | __index_level_0__: 491,490

2210.17311
Shared Manifold Learning Using a Triplet Network for Multiple Sensor Translation and Fusion with Missing Data
Heterogeneous data fusion can enhance the robustness and accuracy of an algorithm on a given task. However, due to the difference in various modalities, aligning the sensors and embedding their information into discriminative and compact representations is challenging. In this paper, we propose a Contrastive learning based MultiModal Alignment Network (CoMMANet) to align data from different sensors into a shared and discriminative manifold where class information is preserved. The proposed architecture uses a multimodal triplet autoencoder to cluster the latent space in such a way that samples of the same classes from each heterogeneous modality are mapped close to each other. Since all the modalities exist in a shared manifold, a unified classification framework is proposed. The resulting latent space representations are fused to perform more robust and accurate classification. In a missing sensor scenario, the latent space of one sensor is easily and efficiently predicted using another sensor's latent space, thereby allowing sensor translation. We conducted extensive experiments on a manually labeled multimodal dataset containing hyperspectral data from AVIRIS-NG and NEON, and LiDAR (light detection and ranging) data from NEON. Lastly, the model is validated on two benchmark datasets: Berlin Dataset (hyperspectral and synthetic aperture radar) and MUUFL Gulfport Dataset (hyperspectral and LiDAR). A comparison made with other methods demonstrates the superiority of this method. We achieved a mean overall accuracy of 94.3% on the MUUFL dataset and the best overall accuracy of 71.26% on the Berlin dataset, which is better than other state-of-the-art approaches.
Labels: cs.LG, cs.CV | __index_level_0__: 327,641

0811.4339
Finite Lattice-Size Effects in MIMO Detection
Many powerful data detection algorithms employed in multiple-input multiple-output (MIMO) communication systems, such as sphere decoding (SD) and lattice-reduction (LR)-aided detection, were initially designed for infinite lattices. Detection in MIMO systems is, however, based on finite lattices. In this paper, we systematically study the consequences of finite lattice-size for the performance and complexity of MIMO detection algorithms formulated for infinite lattices. Specifically, we find, considering performance and complexity, that LR does not seem to offer advantages when used in conjunction with SD.
Labels: cs.IT | __index_level_0__: 2,702

2303.06193
Adaptive Supervised PatchNCE Loss for Learning H&E-to-IHC Stain Translation with Inconsistent Groundtruth Image Pairs
Immunohistochemical (IHC) staining highlights the molecular information critical to diagnostics in tissue samples. However, compared to H&E staining, IHC staining can be much more expensive in terms of both labor and the laboratory equipment required. This motivates recent research that demonstrates that the correlations between the morphological information present in the H&E-stained slides and the molecular information in the IHC-stained slides can be used for H&E-to-IHC stain translation. However, due to a lack of pixel-perfect H&E-IHC groundtruth pairs, most existing methods have resorted to relying on expert annotations. To remedy this situation, we present a new loss function, Adaptive Supervised PatchNCE (ASP), to directly deal with the input to target inconsistencies in a proposed H&E-to-IHC image-to-image translation framework. The ASP loss is built upon a patch-based contrastive learning criterion, named Supervised PatchNCE (SP), and augments it further with weight scheduling to mitigate the negative impact of noisy supervision. Lastly, we introduce the Multi-IHC Stain Translation (MIST) dataset, which contains aligned H&E-IHC patches for 4 different IHC stains critical to breast cancer diagnosis. In our experiment, we demonstrate that our proposed method outperforms existing image-to-image translation methods for stain translation to multiple IHC stains. All of our code and datasets are available at https://github.com/lifangda01/AdaptiveSupervisedPatchNCE.
Labels: cs.CV | __index_level_0__: 350,731

2010.14603
Learning to be Safe: Deep RL with a Safety Critic
Safety is an essential component for deploying reinforcement learning (RL) algorithms in real-world scenarios, and is critical during the learning process itself. A natural first approach toward safe RL is to manually specify constraints on the policy's behavior. However, just as learning has enabled progress in large-scale development of AI systems, learning safety specifications may also be necessary to ensure safety in messy open-world environments where manual safety specifications cannot scale. Akin to how humans learn incrementally starting in child-safe environments, we propose to learn how to be safe in one set of tasks and environments, and then use that learned intuition to constrain future behaviors when learning new, modified tasks. We empirically study this form of safety-constrained transfer learning in three challenging domains: simulated navigation, quadruped locomotion, and dexterous in-hand manipulation. In comparison to standard deep RL techniques and prior approaches to safe RL, we find that our method enables the learning of new tasks and in new environments with both substantially fewer safety incidents, such as falling or dropping an object, and faster, more stable learning. This suggests a path forward not only for safer RL systems, but also for more effective RL systems.
Labels: cs.LG, cs.RO | __index_level_0__: 203,500

1703.05243
A Hybrid Supervised-unsupervised Method on Image Topic Visualization with Convolutional Neural Network and LDA
Given the progress in image recognition with recent data-driven paradigms, it is still expensive to manually label a large training dataset to fit a convolutional neural network (CNN) model. This paper proposes a hybrid supervised-unsupervised method combining a pre-trained AlexNet with Latent Dirichlet Allocation (LDA) to extract image topics from both an unlabeled life-logging dataset and the COCO dataset. We generate the bag-of-words representations of an egocentric dataset from the softmax layer of AlexNet and use LDA to visualize the subject's living genre with duplicated images. We use a subset of COCO on 4 categories as ground truth, and define a consistent rate to quantitatively analyze the performance of the method; it achieves 84% consistent rate on average, compared to 18.75% from a raw CNN model. The method is capable of detecting false labels and multi-labels from the COCO dataset. For a scalability test, parallelization experiments are conducted with Harp-LDA on an Intel Knights Landing cluster: to extract 1,000 topic assignments for 241,035 COCO images, it takes 10 minutes with 60 threads.
Labels: cs.CV | __index_level_0__: 70,046

2307.12266
Transformer-based Joint Source Channel Coding for Textual Semantic Communication
The Space-Air-Ground-Sea integrated network calls for more robust and secure transmission techniques against jamming. In this paper, we propose a textual semantic transmission framework for robust transmission, which utilizes the advanced natural language processing techniques to model and encode sentences. Specifically, the textual sentences are firstly split into tokens using wordpiece algorithm, and are embedded to token vectors for semantic extraction by Transformer-based encoder. The encoded data are quantized to a fixed length binary sequence for transmission, where binary erasure, symmetric, and deletion channels are considered for transmission. The received binary sequences are further decoded by the transformer decoders into tokens used for sentence reconstruction. Our proposed approach leverages the power of neural networks and attention mechanism to provide reliable and efficient communication of textual data in challenging wireless environments, and simulation results on semantic similarity and bilingual evaluation understudy prove the superiority of the proposed model in semantic transmission.
Labels: cs.CL | __index_level_0__: 381,202

1404.4939
Bipartite Graph based Construction of Compressed Sensing Matrices
This paper proposes an efficient method to construct a bipartite graph with as many edges as possible while avoiding the shortest cycles, i.e., those of length 4. The binary matrix associated with this bipartite graph exhibits phase transitions comparable to, and even better than, those of Gaussian random matrices.
Labels: cs.IT | __index_level_0__: 32,451

1406.7330
Stock Market Prediction from WSJ: Text Mining via Sparse Matrix Factorization
We revisit the problem of predicting directional movements of stock prices based on news articles: here our algorithm uses daily articles from The Wall Street Journal to predict the closing stock prices on the same day. We propose a unified latent space model to characterize the "co-movements" between stock prices and news articles. Unlike many existing approaches, our new model is able to simultaneously leverage the correlations: (a) among stock prices, (b) among news articles, and (c) between stock prices and news articles. Thus, our model is able to make daily predictions on more than 500 stocks (most of which are not even mentioned in any news article) while having low complexity. We carry out extensive backtesting on trading strategies based on our algorithm. The result shows that our model has substantially better accuracy rate (55.7%) compared to many widely used algorithms. The return (56%) and Sharpe ratio due to a trading strategy based on our model are also much higher than baseline indices.
Labels: cs.LG | __index_level_0__: 34,201

1206.5065
A generic framework for video understanding applied to group behavior recognition
This paper presents an approach to detect and track groups of people in video-surveillance applications, and to automatically recognize their behavior. This method keeps track of individuals moving together by maintaining a spatial and temporal group coherence. First, people are individually detected and tracked. Second, their trajectories are analyzed over a temporal window and clustered using the Mean-Shift algorithm. A coherence value describes how well a set of people can be described as a group. Furthermore, we propose a formal event description language. The group events recognition approach is successfully validated on 4 camera views from 3 datasets: an airport, a subway, a shopping center corridor and an entrance hall.
Labels: cs.CV | __index_level_0__: 16,769

cs/0007035
Mapping WordNets Using Structural Information
We present a robust approach for linking already existing lexical/semantic hierarchies. We use a constraint satisfaction algorithm (relaxation labeling) to select, among a set of candidates, the node in a target taxonomy that best matches each node in a source taxonomy. In particular, we use it to map the nominal part of WordNet 1.5 onto WordNet 1.6, with very high precision and very low remaining ambiguity.
Labels: cs.CL | __index_level_0__: 537,168

2001.09950
Manipulating Deformable Objects by Interleaving Prediction, Planning, and Control
We present a framework for deformable object manipulation that interleaves planning and control, enabling complex manipulation tasks without relying on high-fidelity modeling or simulation. The key question we address is when should we use planning and when should we use control to achieve the task? Planners are designed to find paths through complex configuration spaces, but for highly underactuated systems, such as deformable objects, achieving a specific configuration is very difficult even with high-fidelity models. Conversely, controllers can be designed to achieve specific configurations, but they can be trapped in undesirable local minima due to obstacles. Our approach consists of three components: (1) A global motion planner to generate gross motion of the deformable object; (2) A local controller for refinement of the configuration of the deformable object; and (3) A novel deadlock prediction algorithm to determine when to use planning versus control. By separating planning from control we are able to use different representations of the deformable object, reducing overall complexity and enabling efficient computation of motion. We provide a detailed proof of probabilistic completeness for our planner, which is valid despite the fact that our system is underactuated and we do not have a steering function. We then demonstrate that our framework is able to successfully perform several manipulation tasks with rope and cloth in simulation which cannot be performed using either our controller or planner alone. These experiments suggest that our planner can generate paths efficiently, taking under a second on average to find a feasible path in three out of four scenarios. We also show that our framework is effective on a 16 DoF physical robot, where reachability and dual-arm constraints make the planning more difficult.
Labels: cs.RO | __index_level_0__: 161,706

2310.17620
Radar-Only Off-Road Local Navigation
Off-road robotics have traditionally utilized lidar for local navigation due to its accuracy and high resolution. However, the limitations of lidar, such as reduced performance in harsh environmental conditions and limited range, have prompted the exploration of alternative sensing technologies. This paper investigates the potential of radar for off-road local navigation, as it offers the advantages of a longer range and the ability to penetrate dust and light vegetation. We adapt existing lidar-based methods for radar and evaluate the performance in comparison to lidar under various off-road conditions. We show that radar can provide a significant range advantage over lidar while maintaining accuracy for both ground plane estimation and obstacle detection. And finally, we demonstrate successful autonomous navigation at a speed of 2.5 m/s over a path length of 350 m using only radar for ground plane estimation and obstacle detection.
Labels: cs.RO | __index_level_0__: 403,211

2112.02792
Incentive Compatible Pareto Alignment for Multi-Source Large Graphs
In this paper, we focus on learning effective entity matching models over multi-source large-scale data. For real applications, we relax the typical assumptions that data distributions/spaces or entity identities are shared between sources, and propose a Relaxed Multi-source Large-scale Entity-matching (RMLE) problem. Challenges of the problem include 1) how to align large-scale entities between sources to share information and 2) how to mitigate negative transfer from jointly learning multi-source data. Worse, one practical issue is the entanglement between both challenges. Specifically, incorrect alignments may increase negative transfer, while mitigating negative transfer for one source may result in poorly learned representations for other sources and then decrease alignment accuracy. To handle the entangled challenges, we point out that the key is to optimize information sharing first based on Pareto front optimization, by showing that information sharing significantly influences the Pareto front, which depicts lower bounds of negative transfer. Consequently, we propose an Incentive Compatible Pareto Alignment (ICPA) method to first optimize cross-source alignments based on Pareto front optimization, then mitigate negative transfer constrained on the optimized alignments. This mechanism allows each source to learn based on its true preference without worrying about deteriorating the representations of other sources. Specifically, the Pareto front optimization encourages minimizing lower bounds of negative transfer, which optimizes whether and which to align. Comprehensive empirical evaluation results on four large-scale datasets are provided to demonstrate the effectiveness and superiority of ICPA. Online A/B test results at a search advertising platform also demonstrate the effectiveness of ICPA in production environments.
Labels: cs.LG, Other | __index_level_0__: 269,963

2408.11527
The Vizier Gaussian Process Bandit Algorithm
Google Vizier has performed millions of optimizations and accelerated numerous research and production systems at Google, demonstrating the success of Bayesian optimization as a large-scale service. Over multiple years, its algorithm has been improved considerably, through the collective experiences of numerous research efforts and user feedback. In this technical report, we discuss the implementation details and design choices of the current default algorithm provided by Open Source Vizier. Our experiments on standardized benchmarks reveal its robustness and versatility against well-established industry baselines on multiple practical modes.
Labels: cs.AI, cs.LG | __index_level_0__: 482,330

2307.10420
GOOSE Algorithm: A Powerful Optimization Tool for Real-World Engineering Challenges and Beyond
This study proposes the GOOSE algorithm, a novel metaheuristic algorithm based on the goose's behavior during rest and foraging. The goose stands on one leg and keeps its balance to guard and protect other individuals in the flock. The GOOSE algorithm is benchmarked on 19 well-known benchmark test functions, and the results are verified by a comparative study with the genetic algorithm (GA), particle swarm optimization (PSO), dragonfly algorithm (DA), and fitness dependent optimizer (FDO). In addition, the proposed algorithm is tested on 10 modern benchmark functions, and the gained results are compared with three recent algorithms: the dragonfly algorithm, whale optimization algorithm (WOA), and salp swarm algorithm (SSA). Moreover, the GOOSE algorithm is tested on 5 classical benchmark functions, and the obtained results are evaluated against six algorithms: fitness dependent optimizer (FDO), FOX optimizer, butterfly optimization algorithm (BOA), whale optimization algorithm, dragonfly algorithm, and chimp optimization algorithm (ChOA). The achieved findings attest to the proposed algorithm's superior performance compared to the other algorithms used in the current study. The technique is then applied to three renowned real-world engineering challenges, welded beam design, the economic load dispatch problem, and the pathological IgG fraction in the nervous system, and the outcomes of these case studies illustrate how well the suggested approach can optimize issues that arise in the real world.
Labels: cs.AI | __index_level_0__: 380,537

2202.08686
Necessary and sufficient condition for a generic 3R serial manipulator to be cuspidal
Cuspidal robots can travel from one inverse kinematic solution to another without meeting a singularity. The name cuspidal was coined based on the existence of a cusp point in the workspace of 3R serial robots. The existence of a cusp point was proved to be a necessary and sufficient condition for orthogonal robots to be cuspidal, but it was not possible to extend this condition to non-orthogonal robots. The goal of this paper is to prove that this condition stands for any generic 3R robot. This result would give the designer more flexibility. In the presented work, the geometrical interpretation of the inverse kinematics of 3R robots is revisited and important observations on the nonsingular change of posture are noted. The paper presents a theorem regarding the existence of reduced aspects in any generic 3R serial robot. Based on these observations and on this theorem, we prove that the existence of a cusp point is a necessary and sufficient condition for any 3R generic robot to be cuspidal.
Labels: cs.RO | __index_level_0__: 280,959

2305.18537
Biconnection Gravity as a Statistical Manifold
We formulate a bi-Connection Theory of Gravity whose Gravitational action consists of a recently defined mutual curvature scalar. Namely, we build a gravitational theory consisting of one metric and two affine connections, in a Metric-Affine Gravity setup. Consequently, coupling the two connections on an equal footing with matter, we show that the geometry of the resulting theory is, quite intriguingly, that of Statistical Manifold. This ultimately indicates a remarkable mathematical correspondence between Gravity and Information Geometry.
Labels: cs.IT | __index_level_0__: 369,128

2305.17733
Investigating Pre-trained Audio Encoders in the Low-Resource Condition
Pre-trained speech encoders have been central to pushing state-of-the-art results across various speech understanding and generation tasks. Nonetheless, the capabilities of these encoders in low-resource settings are yet to be thoroughly explored. To address this, we conduct a comprehensive set of experiments using a representative set of 3 state-of-the-art encoders (Wav2vec2, WavLM, Whisper) in the low-resource setting across 7 speech understanding and generation tasks. We provide various quantitative and qualitative analyses on task performance, convergence speed, and representational properties of the encoders. We observe a connection between the pre-training protocols of these encoders and the way in which they capture information in their internal layers. In particular, we observe the Whisper encoder exhibits the greatest low-resource capabilities on content-driven tasks in terms of performance and convergence speed.
Labels: cs.SD, cs.CL | __index_level_0__: 368,727

1101.0133
Enabling Node Repair in Any Erasure Code for Distributed Storage
Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. The codes perform poorly though, when repair of a failed node is called for, as they typically require the entire file to be downloaded to repair a failed node. A new class of erasure codes, termed as regenerating codes were recently introduced, that do much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that would enable traditional erasure codes to be used, while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple, yet powerful, framework that does precisely this. Under this framework, the nodes are partitioned into two 'types' and encoded using two codes in a manner that reduces the problem of node-repair to that of erasure-decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to avail one or more of the following advantages: simultaneous minimization of storage space and repair-bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
Labels: cs.IT, Other | __index_level_0__: 8,686

1503.04904
Distributed Continuous-time Approximate Projection Protocols for Shortest Distance Optimization Problems
In this paper, we investigate the distributed shortest distance optimization problem for a multi-agent network to cooperatively minimize the sum of the quadratic distances from some convex sets, where each set is only associated with one agent. To deal with the optimization problem with projection uncertainties, we propose a distributed continuous-time dynamical protocol based on a new concept of approximate projection. Here each agent can only obtain an approximate projection point on the boundary of its convex set, and communicate with its neighbors over a time-varying communication graph. First, we show that no matter how large the approximate angle is, the system states are always bounded for any initial condition, and uniformly bounded with respect to all initial conditions if the inferior limit of the stepsize is greater than zero. Then, in the two cases, nonempty intersection and empty intersection of convex sets, we provide stepsize and approximate angle conditions to ensure the optimal convergence, respectively. Moreover, we give some characterizations about the optimal solutions for the empty intersection case and also present the convergence error between agents' estimates and the optimal point in the case of constant stepsizes and approximate angles.
Labels: cs.SY | __index_level_0__: 41,194

1109.5404
Towards Optimal Learning of Chain Graphs
In this paper, we extend Meek's conjecture (Meek 1997) from directed and acyclic graphs to chain graphs, and prove that the extended conjecture is true. Specifically, we prove that if a chain graph H is an independence map of the independence model induced by another chain graph G, then (i) G can be transformed into H by a sequence of directed and undirected edge additions and feasible splits and mergings, and (ii) after each operation in the sequence H remains an independence map of the independence model induced by G. Our result has the same important consequence for learning chain graphs from data as the proof of Meek's conjecture in (Chickering 2002) had for learning Bayesian networks from data: It makes it possible to develop efficient and asymptotically correct learning algorithms under mild assumptions.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
12,315
2501.18782
PSO-Net: Development of an automated psoriasis assessment system using attention-based interpretable deep neural networks
Psoriasis is a chronic skin condition that requires long-term treatment and monitoring. Although the Psoriasis Area and Severity Index (PASI) is utilized as a standard measurement to assess psoriasis severity in clinical trials, it has many drawbacks such as (1) patient burden for in-person clinic visits for assessment of psoriasis, (2) time required for investigator scoring, and (3) variability of inter- and intra-rater scoring. To address these drawbacks, we propose a novel and interpretable deep learning architecture called PSO-Net, which maps digital images from different anatomical regions to derive attention-based scores. Regional scores are further combined to estimate an absolute PASI score. Moreover, we devise a novel regression activation map for interpretability through ranking attention scores. Using this approach, we achieved inter-class correlation scores of 82.2% [95% CI: 77-87%] and 87.8% [95% CI: 84-91%] with two different clinician raters, respectively.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
528,850
2407.01635
Commute Graph Neural Networks
Graph Neural Networks (GNNs) have shown remarkable success in learning from graph-structured data. However, their application to directed graphs (digraphs) presents unique challenges, primarily due to the inherent asymmetry in node relationships. Traditional GNNs are adept at capturing unidirectional relations but fall short in encoding the mutual path dependencies between nodes, such as asymmetrical shortest paths typically found in digraphs. Recognizing this gap, we introduce Commute Graph Neural Networks (CGNN), an approach that seamlessly integrates node-wise commute time into the message passing scheme. The cornerstone of CGNN is an efficient method for computing commute time using a newly formulated digraph Laplacian. Commute time is then integrated into the neighborhood aggregation process, with neighbor contributions weighted according to their respective commute time to the central node in each layer. It enables CGNN to directly capture the mutual, asymmetric relationships in digraphs. Extensive experiments confirm the superior performance of CGNN.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
469,396
2502.07549
HGTUL: A Hypergraph-based Model For Trajectory User Linking
Trajectory User Linking (TUL), which links anonymous trajectories with users who generate them, plays a crucial role in modeling human mobility. Despite significant advancements in this field, existing studies primarily neglect the high-order inter-trajectory relationships, which represent complex associations among multiple trajectories, manifested through multi-location co-occurrence patterns emerging when trajectories intersect at various Points of Interest (POIs). Furthermore, they also overlook the variable influence of POIs on different trajectories, as well as the user class imbalance problem caused by disparities in user activity levels and check-in frequencies. To address these limitations, we propose a novel HyperGraph-based multi-perspective Trajectory User Linking model (HGTUL). Our model learns trajectory representations from both relational and spatio-temporal perspectives: (1) it captures high-order associations among trajectories by constructing a trajectory hypergraph and leverages a hypergraph attention network to learn the variable impact of POIs on trajectories; (2) it models the spatio-temporal characteristics of trajectories by incorporating their temporal and spatial information into a sequential encoder. Moreover, we design a data balancing method to effectively address the user class imbalance problem and experimentally validate its significance in TUL. Extensive experiments on three real-world datasets demonstrate that HGTUL outperforms state-of-the-art baselines, achieving improvements of 2.57%~20.09% and 5.68%~26.00% in ACC@1 and Macro-F1 metrics, respectively.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
532,655
1904.08280
UAV Positioning and Power Control for Two-Way Wireless Relaying
This paper considers an unmanned-aerial-vehicle-enabled (UAV-enabled) wireless network where a relay UAV is used for two-way communications between a ground base station (BS) and a set of distant user equipment (UE). The UAV adopts the amplify-and-forward strategy for two-way relaying over orthogonal frequency bands. The UAV positioning and the transmission powers of all nodes are jointly designed to maximize the sum rate of both uplink and downlink subject to transmission power constraints and the signal-to-noise ratio constraint on the UAV control channel. The formulated joint positioning and power control (JPPC) problem has an intricate expression of the sum rate due to two-way transmissions and is difficult to solve in general. We propose a novel concave surrogate function for the sum rate and employ the successive convex approximation (SCA) technique for obtaining a high-quality approximate solution. We show that the proposed surrogate function has a small curvature and enables a fast convergence of SCA. Furthermore, we develop a computationally efficient JPPC algorithm by applying the FISTA-type accelerated gradient projection (AGP) algorithm to solve the SCA problem as well as one of the projection subproblems, resulting in a double-loop AGP method. Simulation results show that the proposed JPPC algorithms are not only computationally efficient but also greatly outperform the heuristic approaches.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
128,019
2101.06891
A note on the price of bandit feedback for mistake-bounded online learning
The standard model and the bandit model are two generalizations of the mistake-bound model to online multiclass classification. In both models the learner guesses a classification in each round, but in the standard model the learner receives the correct classification after each guess, while in the bandit model the learner is only told whether or not their guess is correct in each round. For any set $F$ of multiclass classifiers, define $opt_{std}(F)$ and $opt_{bandit}(F)$ to be the optimal worst-case number of prediction mistakes in the standard and bandit models respectively. Long (Theoretical Computer Science, 2020) claimed that for all $M > 2$ and infinitely many $k$, there exists a set $F$ of functions from a set $X$ to a set $Y$ of size $k$ such that $opt_{std}(F) = M$ and $opt_{bandit}(F) \ge (1 - o(1))(|Y|\ln{|Y|})opt_{std}(F)$. The proof of this result depended on the following lemma, which is false e.g. for all prime $p \ge 5$, $s = \mathbf{1}$ (the all $1$ vector), $t = \mathbf{2}$ (the all $2$ vector), and all $z$. Lemma: Fix $n \ge 2$ and prime $p$, and let $u$ be chosen uniformly at random from $\left\{0, \dots, p-1\right\}^n$. For any $s, t \in \left\{1, \dots, p-1\right\}^n$ with $s \neq t$ and for any $z \in \left\{0, \dots, p-1\right\}$, we have $\Pr(t \cdot u = z \mod p \text{ } | \text{ } s \cdot u = z \mod p) = \frac{1}{p}$. We show that this lemma is false precisely when $s$ and $t$ are multiples of each other mod $p$. Then using a new lemma, we fix Long's proof.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
215,872
2103.01910
MultiSubs: A Large-scale Multimodal and Multilingual Dataset
This paper introduces a large-scale multimodal and multilingual dataset that aims to facilitate research on grounding words to images in their contextual usage in language. The dataset consists of images selected to unambiguously illustrate concepts expressed in sentences from movie subtitles. The dataset is a valuable resource as (i) the images are aligned to text fragments rather than whole sentences; (ii) multiple images are possible for a text fragment and a sentence; (iii) the sentences are free-form and real-world like; (iv) the parallel texts are multilingual. We set up a fill-in-the-blank game for humans to evaluate the quality of the automatic image selection process of our dataset. We show the utility of the dataset on two automatic tasks: (i) fill-in-the-blank; (ii) lexical translation. Results of the human evaluation and automatic models demonstrate that images can be a useful complement to the textual context. The dataset will benefit research on visual grounding of words especially in the context of free-form sentences, and can be obtained from https://doi.org/10.5281/zenodo.5034604 under a Creative Commons licence.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
222,788
2201.13011
On the Power-Law Hessian Spectrums in Deep Learning
It is well-known that the Hessian of the deep loss landscape matters to optimization, generalization, and even robustness of deep learning. Recent works empirically discovered that the Hessian spectrum in deep learning has a two-component structure that consists of a small number of large eigenvalues and a large number of nearly-zero eigenvalues. However, the theoretical mechanism and the mathematics behind the Hessian spectrum are still largely under-explored. To the best of our knowledge, we are the first to demonstrate that the Hessian spectrums of well-trained deep neural networks exhibit simple power-law structures. Inspired by the statistical physical theories and the spectral analysis of natural proteins, we provide a maximum-entropy theoretical interpretation for explaining why the power-law structure exists and suggest a spectral parallel between protein evolution and training of deep neural networks. By conducting extensive experiments, we further use the power-law spectral framework as a useful tool to explore multiple novel behaviors of deep learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
277,860
2502.04849
Advancing Wasserstein Convergence Analysis of Score-Based Models: Insights from Discretization and Second-Order Acceleration
Score-based diffusion models have emerged as powerful tools in generative modeling, yet their theoretical foundations remain underexplored. In this work, we focus on the Wasserstein convergence analysis of score-based diffusion models. Specifically, we investigate the impact of various discretization schemes, including Euler discretization, exponential integrators, and midpoint randomization methods. Our analysis provides a quantitative comparison of these discrete approximations, emphasizing their influence on convergence behavior. Furthermore, we explore scenarios where Hessian information is available and propose an accelerated sampler based on the local linearization method. We demonstrate that this Hessian-based approach achieves faster convergence rates of order $\widetilde{\mathcal{O}}\left(\frac{1}{\varepsilon}\right)$, significantly improving upon the standard rate $\widetilde{\mathcal{O}}\left(\frac{1}{\varepsilon^2}\right)$ of vanilla diffusion models, where $\varepsilon$ denotes the target accuracy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
531,342
2404.01207
Vision-language models for decoding provider attention during neonatal resuscitation
Neonatal resuscitations demand an exceptional level of attentiveness from providers, who must process multiple streams of information simultaneously. Gaze strongly influences decision making; thus, understanding where a provider is looking during neonatal resuscitations could inform provider training, enhance real-time decision support, and improve the design of delivery rooms and neonatal intensive care units (NICUs). Current approaches to quantifying neonatal providers' gaze rely on manual coding or simulations, which limit scalability and utility. Here, we introduce an automated, real-time, deep learning approach capable of decoding provider gaze into semantic classes directly from first-person point-of-view videos recorded during live resuscitations. Combining state-of-the-art, real-time segmentation with vision-language models (CLIP), our low-shot pipeline attains 91\% classification accuracy in identifying gaze targets without training. Upon fine-tuning, the performance of our gaze-guided vision transformer exceeds 98\% accuracy in gaze classification, approaching human-level precision. This system, capable of real-time inference, enables objective quantification of provider attention dynamics during live neonatal resuscitation. Our approach offers a scalable solution that seamlessly integrates with existing infrastructure for data-scarce gaze analysis, thereby offering new opportunities for understanding and refining clinical decision making.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
443,312
2203.05553
Transfer of Representations to Video Label Propagation: Implementation Factors Matter
This work studies feature representations for dense label propagation in video, with a focus on recently proposed methods that learn video correspondence using self-supervised signals such as colorization or temporal cycle consistency. In the literature, these methods have been evaluated with an array of inconsistent settings, making it difficult to discern trends or compare performance fairly. Starting with a unified formulation of the label propagation algorithm that encompasses most existing variations, we systematically study the impact of important implementation factors in feature extraction and label propagation. Along the way, we report the accuracies of properly tuned supervised and unsupervised still image baselines, which are higher than those found in previous works. We also demonstrate that augmenting video-based correspondence cues with still-image-based ones can further improve performance. We then attempt a fair comparison of recent video-based methods on the DAVIS benchmark, showing convergence of best methods to performance levels near our strong ImageNet baseline, despite the usage of a variety of specialized video-based losses and training particulars. Additional comparisons on JHMDB and VIP datasets confirm the similar performance of current methods. We hope that this study will help to improve evaluation practices and better inform future research directions in temporal correspondence.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
284,834
2312.15610
Towards Learning Geometric Eigen-Lengths Crucial for Fitting Tasks
Some extremely low-dimensional yet crucial geometric eigen-lengths often determine the success of some geometric tasks. For example, the height of an object is important to measure to check if it can fit between the shelves of a cabinet, while the width of a couch is crucial when trying to move it through a doorway. Humans have materialized such crucial geometric eigen-lengths in common sense since they are very useful in serving as succinct yet effective, highly interpretable, and universal object representations. However, it remains obscure and underexplored if learning systems can be equipped with similar capabilities of automatically discovering such key geometric quantities from doing tasks. In this work, we therefore for the first time formulate and propose a novel learning problem on this question and set up a benchmark suite including tasks, data, and evaluation metrics for studying the problem. We focus on a family of common fitting tasks as the testbed for the proposed learning problem. We explore potential solutions and demonstrate the feasibility of learning eigen-lengths from simply observing successful and failed fitting trials. We also attempt geometric grounding for more accurate eigen-length measurement and study the reusability of the learned eigen-lengths across multiple tasks. Our work marks the first exploratory step toward learning crucial geometric eigen-lengths and we hope it can inspire future research in tackling this important yet underexplored problem.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
418,065
2203.00638
PaSca: a Graph Neural Architecture Search System under the Scalable Paradigm
Graph neural networks (GNNs) have achieved state-of-the-art performance in various graph-based tasks. However, as mainstream GNNs are designed based on the neural message passing mechanism, they do not scale well to data size and message passing steps. Although there has been an emerging interest in the design of scalable GNNs, current research focuses on specific GNN designs, rather than the general design space, limiting the discovery of potential scalable GNN models. This paper proposes PasCa, a new paradigm and system that offers a principled approach to systemically construct and explore the design space for scalable GNNs, rather than studying individual designs. Through deconstructing the message passing mechanism, PasCa presents a novel Scalable Graph Neural Architecture Paradigm (SGAP), together with a general architecture design space consisting of 150k different designs. Following the paradigm, we implement an auto-search engine that can automatically search for well-performing and scalable GNN architectures to balance the trade-off between multiple criteria (e.g., accuracy and efficiency) via multi-objective optimization. Empirical studies on ten benchmark datasets demonstrate that the representative instances (i.e., PasCa-V1, V2, and V3) discovered by our system achieve consistent performance among competitive baselines. Concretely, PasCa-V3 outperforms the state-of-the-art GNN method JK-Net by 0.4\% in terms of predictive accuracy on our large industry dataset while achieving up to $28.3\times$ training speedups.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
283,072
2210.15497
LSG Attention: Extrapolation of pretrained Transformers to long sequences
Transformer models achieve state-of-the-art performance on a wide range of NLP tasks. They however suffer from a prohibitive limitation due to the self-attention mechanism, inducing $O(n^2)$ complexity with regard to sequence length. To address this limitation, we introduce the LSG architecture which relies on Local, Sparse and Global attention. We show that LSG attention is fast, efficient and competitive in classification and summarization tasks on long documents. Interestingly, it can also be used to adapt existing pretrained models to efficiently extrapolate to longer sequences with no additional training. Along with the introduction of the LSG attention mechanism, we propose tools to train new models and adapt existing ones based on this mechanism.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
326,980
2109.02716
Vision Transformers For Weeds and Crops Classification Of High Resolution UAV Images
Crop and weed monitoring is an important challenge for agriculture and food production nowadays. Thanks to recent advances in data acquisition and computation technologies, agriculture is evolving toward smarter, precision farming to meet the demands of high yield and high quality crop production. Classification and recognition in Unmanned Aerial Vehicle (UAV) images are important phases for crop monitoring. Advances in deep learning models relying on Convolutional Neural Networks (CNNs) have achieved high performance in image classification in the agricultural domain. Despite the success of this architecture, CNNs still face many challenges such as high computation cost, the need for large labelled datasets, ... Natural language processing's transformer architecture can be an alternative approach to deal with CNNs' limitations. Making use of the self-attention paradigm, Vision Transformer (ViT) models can achieve competitive or better results without applying any convolution operations. In this paper, we adopt the self-attention mechanism via the ViT models for plant classification of weeds and crops: red beet, off-type beet (green leaves), parsley and spinach. Our experiments show that with a small set of labelled training data, ViT models perform better compared to state-of-the-art CNN-based models EfficientNet and ResNet, with a top accuracy of 99.8\% achieved by the ViT model.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
253,827
2211.05284
FormLM: Recommending Creation Ideas for Online Forms by Modelling Semantic and Structural Information
Online forms are widely used to collect data from humans and have a multi-billion dollar market. Many software products provide online services for creating semi-structured forms where questions and descriptions are organized by pre-defined structures. However, the design and creation process of forms is still tedious and requires expert knowledge. To assist form designers, in this work we present FormLM to model online forms (by enhancing a pre-trained language model with form structural information) and recommend form creation ideas (including question / options recommendations and block type suggestion). For model training and evaluation, we collect the first public online form dataset with 62K online forms. Experiment results show that FormLM significantly outperforms general-purpose language models on all tasks, with improvements of 4.71 on Question Recommendation and 10.6 on Block Type Suggestion in terms of ROUGE-1 and Macro-F1, respectively.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
329,498
2001.00888
Towards Scalable Dataframe Systems
Dataframes are a popular abstraction to represent, prepare, and analyze data. Despite the remarkable success of dataframe libraries in R and Python, dataframes face performance issues even on moderately large datasets. Moreover, there is significant ambiguity regarding dataframe semantics. In this paper we lay out a vision and roadmap for scalable dataframe systems. To demonstrate the potential in this area, we report on our experience building MODIN, a scaled-up implementation of the most widely-used and complex dataframe API today, Python's pandas. With pandas as a reference, we propose a simple data model and algebra for dataframes to ground discussion in the field. Given this foundation, we lay out an agenda of open research opportunities where the distinct features of dataframes will require extending the state of the art in many dimensions of data management. We discuss the implications of signature dataframe features including flexible schemas, ordering, row/column equivalence, and data/metadata fluidity, as well as the piecemeal, trial-and-error-based approach to interacting with dataframes.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
159,344
2305.11811
Monte-Carlo Search for an Equilibrium in Dec-POMDPs
Decentralized partially observable Markov decision processes (Dec-POMDPs) formalize the problem of designing individual controllers for a group of collaborative agents under stochastic dynamics and partial observability. Seeking a global optimum is difficult (NEXP complete), but seeking a Nash equilibrium -- each agent policy being a best response to the other agents -- is more accessible, and allowed addressing infinite-horizon problems with solutions in the form of finite state controllers. In this paper, we show that this approach can be adapted to cases where only a generative model (a simulator) of the Dec-POMDP is available. This requires relying on a simulation-based POMDP solver to construct an agent's FSC node by node. A related process is used to heuristically derive initial FSCs. Experiments with benchmarks show that MC-JESP is competitive with existing Dec-POMDP solvers, and even outperforms many offline methods using explicit models.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
365,704
2109.00238
Complexity Measures for Multi-objective Symbolic Regression
Multi-objective symbolic regression has the advantage that while the accuracy of the learned models is maximized, the complexity is automatically adapted and need not be specified a priori. The result of the optimization is not a single solution anymore, but a whole Pareto-front describing the trade-off between accuracy and complexity. In this contribution we study which complexity measures are most appropriately used in symbolic regression when performing multi-objective optimization with NSGA-II. Furthermore, we present a novel complexity measure that includes semantic information based on the function symbols occurring in the models and test its effects on several benchmark datasets. Results comparing multiple complexity measures are presented in terms of the achieved accuracy and model length to illustrate how the search direction of the algorithm is affected.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
253,052
2412.15876
AI-in-the-loop: The future of biomedical visual analytics applications in the era of AI
AI is the workhorse of modern data analytics and omnipresent across many sectors. Large Language Models and multi-modal foundation models are today capable of generating code, charts, visualizations, etc. How will these massive developments of AI in data analytics shape future data visualizations and visual analytics workflows? What is the potential of AI to reshape methodology and design of future visual analytics applications? What will be our role as visualization researchers in the future? What are opportunities, open challenges and threats in the context of an increasingly powerful AI? This Visualization Viewpoint discusses these questions in the special context of biomedical data analytics as an example of a domain in which critical decisions are taken based on complex and sensitive data, with high requirements on transparency, efficiency, and reliability. We map recent trends and developments in AI on the elements of interactive visualization and visual analytics workflows and highlight the potential of AI to transform biomedical visualization as a research field. Given that agency and responsibility have to remain with human experts, we argue that it is helpful to keep the focus on human-centered workflows, and to use visual analytics as a tool for integrating ``AI-in-the-loop''. This is in contrast to the more traditional term ``human-in-the-loop'', which focuses on incorporating human expertise into AI-based systems.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
519,297
2305.01442
A Direct Construction of Optimal Symmetrical Z-Complementary Code Sets of Prime Power Lengths
This paper presents a direct construction of an optimal symmetrical Z-complementary code set (SZCCS) of prime power lengths using a multi-variable function (MVF). SZCCS is a natural extension of the Z-complementary code set (ZCCS), which has only front-end zero correlation zone (ZCZ) width. SZCCS has both front-end and tail-end ZCZ width. SZCCSs are used in developing optimal training sequences for broadband generalized spatial modulation systems over frequency-selective channels because they have ZCZ width on both the front and tail ends. The construction of optimal SZCCS with large set sizes and prime power lengths is presented for the first time in this paper. Furthermore, it is worth noting that several existing works on ZCCS and SZCCS can be viewed as special cases of the proposed construction.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
361,674
2212.01485
A Theory of Semantic Communication
Semantic communication is an emerging research area that has gained a wide range of attention recently. Despite this growing interest, there remains a notable absence of a comprehensive and widely-accepted framework for characterizing semantic communication. This paper introduces a new conceptualization of semantic communication and formulates two fundamental problems, which we term language exploitation and language design. Our contention is that the challenge of language design can be effectively situated within the broader framework of joint source-channel coding theory, underpinned by a comprehensive end-to-end distortion metric. To tackle the language exploitation problem, we put forth three approaches: semantic encoding, semantic decoding, and a synergistic combination of both in the form of combined semantic encoding and decoding. Furthermore, we establish the semantic distortion-cost region as a critical framework for assessing the language exploitation problem. For each of the three proposed approaches, the achievable distortion-cost region is characterized. Overall, this paper aims to shed light on the intricate dynamics of semantic communication, paving the way for a deeper understanding of this evolving field.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
334,447
2205.10363
Robust Task-Oriented Dialogue Generation with Contrastive Pre-training and Adversarial Filtering
Data artifacts incentivize machine learning models to learn non-transferable generalizations by taking advantage of shortcuts in the data, and there is growing evidence that data artifacts play a role for the strong results that deep learning models achieve in recent natural language processing benchmarks. In this paper, we focus on task-oriented dialogue and investigate whether popular datasets such as MultiWOZ contain such data artifacts. We found that by only keeping frequent phrases in the training examples, state-of-the-art models perform similarly compared to the variant trained with full data, suggesting they exploit these spurious correlations to solve the task. Motivated by this, we propose a contrastive learning based framework to encourage the model to ignore these cues and focus on learning generalisable patterns. We also experiment with adversarial filtering to remove "easy" training instances so that the model would focus on learning from the "harder" instances. We conduct a number of generalization experiments -- e.g., cross-domain/dataset and adversarial tests -- to assess the robustness of our approach and found that it works exceptionally well.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
297,673
2208.13007
Multi-level Contrastive Learning Framework for Sequential Recommendation
Sequential recommendation (SR) aims to predict the subsequent behaviors of users by understanding their successive historical behaviors. Recently, some methods for SR are devoted to alleviating the data sparsity problem (i.e., limited supervised signals for training), which take account of contrastive learning to incorporate self-supervised signals into SR. Despite their achievements, it is far from enough to learn informative user/item embeddings due to the inadequate modeling of complex collaborative information and co-action information, such as user-item relation, user-user relation, and item-item relation. In this paper, we study the problem of SR and propose a novel multi-level contrastive learning framework for sequential recommendation, named MCLSR. Different from the previous contrastive learning-based methods for SR, MCLSR learns the representations of users and items through a cross-view contrastive learning paradigm from four specific views at two different levels (i.e., interest- and feature-level). Specifically, the interest-level contrastive mechanism jointly learns the collaborative information with the sequential transition patterns, and the feature-level contrastive mechanism re-observes the relation between users and items via capturing the co-action information (i.e., co-occurrence). Extensive experiments on four real-world datasets show that the proposed MCLSR outperforms the state-of-the-art methods consistently.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
314,921
2102.08738
Rate-Splitting Multiple Access for Multi-Antenna Broadcast Channel with Imperfect CSIT and CSIR
Rate-splitting multiple access (RSMA) has appeared as a powerful transmission and multiple access strategy for multi-user multi-antenna communications. Uniquely, this paper studies the optimization of the sum-rate of RSMA with imperfect channel state information (CSI) at the transmitter (CSIT) and the receivers (CSIR). The robustness of the RSMA approach against imperfect CSIT has been investigated in the previous studies while there has been no consideration for the effects of imperfect CSIR. This motivates us to develop a robust design relying on RSMA in the presence of both imperfect CSIT and CSIR. Since the optimization problem for the design of RSMA precoder and power allocations to maximize the sum-rate is non-convex, it is hard to solve directly. To tackle the non-convexity, we propose a novel alternating optimization algorithm based on semidefinite relaxation (SDR) and concave-convex procedure (CCCP) techniques. By comparing simulation results with conventional methods, it turns out that RSMA is quite robust to imperfect CSIR and CSIT, thereby improving the sum-rate performance.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
220,552
2203.11876
Open-Vocabulary DETR with Conditional Matching
Open-vocabulary object detection, which is concerned with the problem of detecting novel objects guided by natural language, has gained increasing attention from the community. Ideally, we would like to extend an open-vocabulary detector such that it can produce bounding box predictions based on user inputs in the form of either natural language or an exemplar image. This offers great flexibility and user experience for human-computer interaction. To this end, we propose a novel open-vocabulary detector based on DETR -- hence the name OV-DETR -- which, once trained, can detect any object given its class name or an exemplar image. The biggest challenge of turning DETR into an open-vocabulary detector is that it is impossible to calculate the classification cost matrix of novel classes without access to their labeled images. To overcome this challenge, we formulate the learning objective as a binary matching one between input queries (class name or exemplar image) and the corresponding objects, which learns useful correspondence to generalize to unseen queries during testing. For training, we choose to condition the Transformer decoder on the input embeddings obtained from a pre-trained vision-language model like CLIP, in order to enable matching for both text and image queries. With extensive experiments on LVIS and COCO datasets, we demonstrate that our OV-DETR -- the first end-to-end Transformer-based open-vocabulary detector -- achieves non-trivial improvements over the current state of the art.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
287,073
1707.04457
Modeling Harmony with Skip-Grams
String-based (or viewpoint) models of tonal harmony often struggle with data sparsity in pattern discovery and prediction tasks, particularly when modeling composite events like triads and seventh chords, since the number of distinct n-note combinations in polyphonic textures is potentially enormous. To address this problem, this study examines the efficacy of skip-grams in music research, an alternative viewpoint method developed in corpus linguistics and natural language processing that includes sub-sequences of n events (or n-grams) in a frequency distribution if their constituent members occur within a certain number of skips. Using a corpus consisting of four datasets of Western classical music in symbolic form, we found that including skip-grams reduces data sparsity in n-gram distributions by (1) minimizing the proportion of n-grams with negligible counts, and (2) increasing the coverage of contiguous n-grams in a test corpus. What is more, skip-grams significantly outperformed contiguous n-grams in discovering conventional closing progressions (called cadences).
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
77,041
1812.02160
Characterization and space embedding of directed graphs and social networks through magnetic Laplacians
Though commonly found in the real world, directed networks have received relatively less attention from the literature in which concerns their topological and dynamical characteristics. In this work, we develop a magnetic Laplacian-based framework that can be used for studying directed complex networks. More specifically, we introduce a specific heat measurement that can help to characterize the network topology. It is shown that, by using this approach, it is possible to identify the types of several networks, as well as to infer parameters underlying specific network configurations. Then, we consider the dynamics associated with the magnetic Laplacian as a means of embedding networks into a metric space, allowing the identification of mesoscopic structures in artificial networks or unravel the polarization on political blogosphere. By defining a coarse-graining procedure in this metric space, we show how to connect the specific heat measurement and the positions of nodes in this space.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
115,692
2110.04754
Towards High-fidelity Singing Voice Conversion with Acoustic Reference and Contrastive Predictive Coding
Recently, phonetic posteriorgram (PPG) based methods have been quite popular in non-parallel singing voice conversion systems. However, due to the lack of acoustic information in PPGs, the style and naturalness of the converted singing voices are still limited. To solve these problems, in this paper, we utilize an acoustic reference encoder to implicitly model singing characteristics. We experiment with different auxiliary features, including mel spectrograms, HuBERT, and the middle hidden feature (PPG-Mid) of a pretrained automatic speech recognition (ASR) model, as the input of the reference encoder, and finally find the HuBERT feature is the best choice. In addition, we use a contrastive predictive coding (CPC) module to further smooth the voices by predicting future observations in latent space. Experiments show that, compared with the baseline models, our proposed model can significantly improve the naturalness of converted singing voices and the similarity with the target singer. Moreover, our proposed model can also enable speakers with only speech data to sing.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
260,030
1705.03146
CHAM: action recognition using convolutional hierarchical attention model
Recently, the soft attention mechanism, which was originally proposed in language processing, has been applied in computer vision tasks like image captioning. This paper presents improvements to the soft attention model by combining a convolutional LSTM with a hierarchical system architecture to recognize action categories in videos. We call this model the Convolutional Hierarchical Attention Model (CHAM). The model applies a convolutional operation inside the LSTM cell and an attention map generation process to recognize actions. The hierarchical architecture of this model is able to explicitly reason on multi-granularities of action categories. The proposed architecture achieved improved results on three publicly available datasets: the UCF sports dataset, the Olympic sports dataset and the HMDB51 dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
73,125
2203.11977
YouTube over Google's QUIC vs Internet Middleboxes: A Tug of War between Protocol Sustainability and Application QoE
Middleboxes such as web proxies, firewalls, etc. are widely deployed in today's network infrastructure. As a result, most protocols need to adapt their behavior to co-exist. One of the most commonly used transport protocols, QUIC, adapts to such middleboxes by falling back to TCP where they block it. In this paper, we argue that the blind fallback behavior of QUIC, i.e., not distinguishing between failures caused by middleboxes and those caused by network congestion, hugely impacts the performance of QUIC. For this, we focus on YouTube video streaming and conduct a measurement study by utilizing production endpoints of YouTube, enabling TCP and QUIC one at a time. In total, we collect over 2600 streaming hours of data over various bandwidth patterns, from 5 different geographical locations and various video genres. To our surprise, we observe that the legacy setup (TCP) either outperforms or performs the same as the QUIC-enabled browser for more than 60% of cases. We see that our observation is consistent across individual QoE parameters, bandwidth patterns, locations, and videos. Next, we conduct a deep-dive analysis to discover the root cause behind such behavior. We find a good correlation (0.3-0.7) between fallback and QoE drop events, i.e., quality drop and re-buffering or stalling. We further perform Granger causal analysis and find that fallback Granger-causes either quality drop or stalling for 70% of the QUIC-enabled sessions. We believe our study will help designers revisit the decision to enable fallback in QUIC and distinguish between the packet drops caused by middleboxes and network congestion.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
287,100
2501.11553
Clinically Ready Magnetic Microrobots for Targeted Therapies
Systemic drug administration often causes off-target effects limiting the efficacy of advanced therapies. Targeted drug delivery approaches increase local drug concentrations at the diseased site while minimizing systemic drug exposure. We present a magnetically guided microrobotic drug delivery system capable of precise navigation under physiological conditions. This platform integrates a clinical electromagnetic navigation system, a custom-designed release catheter, and a dissolvable capsule for accurate therapeutic delivery. In vitro tests showed precise navigation in human vasculature models, and in vivo experiments confirmed tracking under fluoroscopy and successful navigation in large animal models. The microrobot balances magnetic material concentration, contrast agent loading, and therapeutic drug capacity, enabling effective hosting of therapeutics despite the integration complexity of its components, offering a promising solution for precise targeted drug delivery.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
525,965
1301.2629
Upper-Bounding the Capacity of Relay Communications - Part I
This paper focuses on the capacity of point-to-point relay communications wherein the transmitter is assisted by an intermediate relay. We detail the mathematical models of the cutset bound and the amplify-and-forward (AF) relaying strategy. We present the upper-bound capacity of each relaying strategy from an information theory viewpoint and also in networks with Gaussian channels. We exemplify various outer region capacities of the addressed strategies with two different case studies. The results exhibit that in low signal-to-noise ratio (SNR) environments the cutset performance is better than that of the amplify-and-forward strategy.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
21,022
2205.08147
Pairwise Comparison Network for Remote Sensing Scene Classification
Remote sensing scene classification aims to assign a specific semantic label to a remote sensing image. Recently, convolutional neural networks have greatly improved the performance of remote sensing scene classification. However, some confused images may be easily recognized as the incorrect category, which generally degrades performance. The differences between image pairs can be used to distinguish image categories. This paper proposes a pairwise comparison network, which contains two main steps: pairwise selection and pairwise representation. The proposed network first selects similar image pairs, and then represents the image pairs with pairwise representations. The self-representation is introduced to highlight the informative parts of each image itself, while the mutual-representation is proposed to capture the subtle differences between image pairs. Comprehensive experimental results on two challenging datasets (AID, NWPU-RESISC45) demonstrate the effectiveness of the proposed network. The codes are provided in https://github.com/spectralpublic/PCNet.git.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
296,834
2209.00302
Progressive Fusion for Multimodal Integration
Integration of multimodal information from various sources has been shown to boost the performance of machine learning models and thus has received increased attention in recent years. Often such models use deep modality-specific networks to obtain unimodal features which are combined to obtain "late-fusion" representations. However, these designs run the risk of information loss in the respective unimodal pipelines. On the other hand, "early-fusion" methodologies, which combine features early, suffer from the problems associated with feature heterogeneity and high sample complexity. In this work, we present an iterative representation refinement approach, called Progressive Fusion, which mitigates the issues with late fusion representations. Our model-agnostic technique introduces backward connections that make late stage fused representations available to early layers, improving the expressiveness of the representations at those stages, while retaining the advantages of late fusion designs. We test Progressive Fusion on tasks including affective sentiment detection, multimedia analysis, and time series fusion with different models, demonstrating its versatility. We show that our approach consistently improves performance, for instance attaining a 5% reduction in MSE and 40% improvement in robustness on multimodal time series prediction.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
315,547
2212.08546
Estimating truncation effects of quantum bosonic systems using sampling algorithms
To simulate bosons on a qubit- or qudit-based quantum computer, one has to regularize the theory by truncating infinite-dimensional local Hilbert spaces to finite dimensions. In the search for practical quantum applications, it is important to know how big the truncation errors can be. In general, it is not easy to estimate errors unless we have a good quantum computer. In this paper, we show that traditional sampling methods on classical devices, specifically Markov Chain Monte Carlo, can address this issue for a rather generic class of bosonic systems with a reasonable amount of computational resources available today. As a demonstration, we apply this idea to the scalar field theory on a two-dimensional lattice, with a size that goes beyond what is achievable using exact diagonalization methods. This method can be used to estimate the resources needed for realistic quantum simulations of bosonic theories, and also, to check the validity of the results of the corresponding quantum simulations.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
336,787
2205.01886
P^3 Ranker: Mitigating the Gaps between Pre-training and Ranking Fine-tuning with Prompt-based Learning and Pre-finetuning
Compared to other language tasks, applying pre-trained language models (PLMs) for search ranking often requires more nuances and training signals. In this paper, we identify and study the two mismatches between pre-training and ranking fine-tuning: the training schema gap regarding the differences in training objectives and model architectures, and the task knowledge gap considering the discrepancy between the knowledge needed in ranking and that learned during pre-training. To mitigate these gaps, we propose the Pre-trained, Prompt-learned and Pre-finetuned Neural Ranker (P^3 Ranker). P^3 Ranker leverages prompt-based learning to convert the ranking task into a pre-training-like schema and uses pre-finetuning to initialize the model on intermediate supervised tasks. Experiments on MS MARCO and Robust04 show the superior performance of P^3 Ranker in few-shot ranking. Analyses reveal that P^3 Ranker is able to better adapt to the ranking task through prompt-based learning and retrieve necessary ranking-oriented knowledge gleaned in pre-finetuning, resulting in data-efficient PLM adaptation. Our code is available at https://github.com/NEUIR/P3Ranker.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
294,751
2004.11898
Adversarial Machine Learning in Network Intrusion Detection Systems
Adversarial examples are inputs to a machine learning system intentionally crafted by an attacker to fool the model into producing an incorrect output. These examples have achieved a great deal of success in several domains such as image recognition, speech recognition and spam detection. In this paper, we study the nature of the adversarial problem in Network Intrusion Detection Systems (NIDS). We focus on the attack perspective, which includes techniques to generate adversarial examples capable of evading a variety of machine learning models. More specifically, we explore the use of evolutionary computation (particle swarm optimization and genetic algorithm) and deep learning (generative adversarial networks) as tools for adversarial example generation. To assess the performance of these algorithms in evading a NIDS, we apply them to two publicly available data sets, namely the NSL-KDD and UNSW-NB15, and we contrast them to a baseline perturbation method: Monte Carlo simulation. The results show that our adversarial example generation techniques cause high misclassification rates in eleven different machine learning models, along with a voting classifier. Our work highlights the vulnerability of machine learning based NIDS in the face of adversarial perturbation.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
true
false
false
174,061
2306.05212
RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit
Although Large Language Models (LLMs) have demonstrated extraordinary capabilities in many domains, they still have a tendency to hallucinate and generate fictitious responses to user requests. This problem can be alleviated by augmenting LLMs with information retrieval (IR) systems (also known as retrieval-augmented LLMs). Applying this strategy, LLMs can generate more factual texts in response to user input according to the relevant content retrieved by IR systems from external corpora as references. In addition, by incorporating external knowledge, retrieval-augmented LLMs can answer in-domain questions that cannot be answered by solely relying on the world knowledge stored in parameters. To support research in this area and facilitate the development of retrieval-augmented LLM systems, we develop RETA-LLM, a {RET}rieval-{A}ugmented LLM toolkit. In RETA-LLM, we create a complete pipeline to help researchers and users build their customized in-domain LLM-based systems. Compared with previous retrieval-augmented LLM systems, RETA-LLM provides more plug-and-play modules to support better interaction between IR systems and LLMs, including {request rewriting, document retrieval, passage extraction, answer generation, and fact checking} modules. Our toolkit is publicly available at https://github.com/RUC-GSAI/YuLan-IR/tree/main/RETA-LLM.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
372,099
2108.10674
Density-Based Dynamic Curriculum Learning for Intent Detection
Pre-trained language models have achieved noticeable performance on the intent detection task. However, due to assigning an identical weight to each sample, they suffer from the overfitting of simple samples and the failure to learn complex samples well. To handle this problem, we propose a density-based dynamic curriculum learning model. Our model defines each sample's difficulty level according to its eigenvector's density. In this way, we exploit the overall distribution of all samples' eigenvectors simultaneously. Then we apply a dynamic curriculum learning strategy, which pays distinct attention to samples of various difficulty levels and alters the proportion of samples during the training process. Through the above operation, simple samples are well-trained, and complex samples are enhanced. Experiments on three open datasets verify that the proposed density-based algorithm can distinguish simple and complex samples significantly. Besides, our model obtains an obvious improvement over the strong baselines.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
251,971
2105.00777
Recognition of Oracle Bone Inscriptions by using Two Deep Learning Models
Oracle bone inscriptions (OBIs) contain some of the oldest characters in the world and were used in China about 3000 years ago. As an ancient form of literature, OBIs store a lot of information that can help us understand world history, character evolution, and more. However, as OBIs were only discovered about 120 years ago, few studies have described them, and the aging process has made the inscriptions less legible. Hence, automatic character detection and recognition has become an important issue. This paper aims to design an online OBI recognition system to help preserve and organize this cultural heritage. We evaluated two deep learning models for OBI recognition, and have designed an API that can be accessed online for OBI recognition. In the first stage, you only look once (YOLO) is applied for detecting and recognizing OBIs. However, not all of the OBIs can be detected correctly by YOLO, so we next utilize MobileNet to recognize the undetected OBIs by manually cropping the undetected OBI in the image. MobileNet is used for this second stage of recognition as our evaluation of ten state-of-the-art models showed that it is the best network for OBI recognition due to its superior performance in terms of accuracy, loss and time consumption. We installed our system on an application programming interface (API) and opened it for OBI detection and recognition.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
233,325
1307.0396
On Optimal Zero-Delay Coding of Vector Markov Sources
Optimal zero-delay coding (quantization) of a vector-valued Markov source driven by a noise process is considered. Using a stochastic control problem formulation, the existence and structure of optimal quantization policies are studied. For a finite-horizon problem with bounded per-stage distortion measure, the existence of an optimal zero-delay quantization policy is shown provided that the quantizers allowed are ones with convex codecells. The bounded distortion assumption is relaxed to cover cases that include the linear quadratic Gaussian problem. For the infinite horizon problem and a stationary Markov source the optimality of deterministic Markov coding policies is shown. The existence of optimal stationary Markov quantization policies is also shown provided randomization that is shared by the encoder and the decoder is allowed.
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
25,548
2203.02020
Nonlinear predictive models computation in ADPCM schemes
Recently several papers have been published on nonlinear prediction applied to speech coding. At ICASSP98 we presented a system based on an ADPCM scheme with a nonlinear predictor based on a neural net. The most critical parameter was the training procedure in order to achieve good generalization capability and robustness against mismatch between training and testing conditions. In this paper, we propose several new approaches that improve the performance of the original system by up to 1.2dB of SEGSNR (using Bayesian regularization). The variance of the SEGSNR between frames is also minimized, so the new scheme produces a more stable quality of the output.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
283,599
1711.03440
Learning Non-overlapping Convolutional Neural Networks with Multiple Kernels
In this paper, we consider parameter recovery for non-overlapping convolutional neural networks (CNNs) with multiple kernels. We show that when the inputs follow Gaussian distribution and the sample size is sufficiently large, the squared loss of such CNNs is $\mathit{~locally~strongly~convex}$ in a basin of attraction near the global optima for most popular activation functions, like ReLU, Leaky ReLU, Squared ReLU, Sigmoid and Tanh. The required sample complexity is proportional to the dimension of the input and polynomial in the number of kernels and a condition number of the parameters. We also show that tensor methods are able to initialize the parameters to the local strong convex region. Hence, for most smooth activations, gradient descent following tensor initialization is guaranteed to converge to the global optimal with time that is linear in input dimension, logarithmic in precision and polynomial in other factors. To the best of our knowledge, this is the first work that provides recovery guarantees for CNNs with multiple kernels under polynomial sample and computational complexities.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
84,211
1407.1543
Dictionary Learning and Tensor Decomposition via the Sum-of-Squares Method
We give a new approach to the dictionary learning (also known as "sparse coding") problem of recovering an unknown $n\times m$ matrix $A$ (for $m \geq n$) from examples of the form \[ y = Ax + e, \] where $x$ is a random vector in $\mathbb R^m$ with at most $\tau m$ nonzero coordinates, and $e$ is a random noise vector in $\mathbb R^n$ with bounded magnitude. For the case $m=O(n)$, our algorithm recovers every column of $A$ within arbitrarily good constant accuracy in time $m^{O(\log m/\log(\tau^{-1}))}$, in particular achieving polynomial time if $\tau = m^{-\delta}$ for any $\delta>0$, and time $m^{O(\log m)}$ if $\tau$ is (a sufficiently small) constant. Prior algorithms with comparable assumptions on the distribution required the vector $x$ to be much sparser---at most $\sqrt{n}$ nonzero coordinates---and there were intrinsic barriers preventing these algorithms from applying for denser $x$. We achieve this by designing an algorithm for noisy tensor decomposition that can recover, under quite general conditions, an approximate rank-one decomposition of a tensor $T$, given access to a tensor $T'$ that is $\tau$-close to $T$ in the spectral norm (when considered as a matrix). To our knowledge, this is the first algorithm for tensor decomposition that works in the constant spectral-norm noise regime, where there is no guarantee that the local optima of $T$ and $T'$ have similar structures. Our algorithm is based on a novel approach to using and analyzing the Sum of Squares semidefinite programming hierarchy (Parrilo 2000, Lasserre 2001), and it can be viewed as an indication of the utility of this very general and powerful tool for unsupervised learning problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
34,452
2308.05104
Scene-Generalizable Interactive Segmentation of Radiance Fields
Existing methods for interactive segmentation in radiance fields entail scene-specific optimization and thus cannot generalize across different scenes, which greatly limits their applicability. In this work we make the first attempt at Scene-Generalizable Interactive Segmentation in Radiance Fields (SGISRF) and propose a novel SGISRF method, which can perform 3D object segmentation for novel (unseen) scenes represented by radiance fields, guided by only a few interactive user clicks in a given set of multi-view 2D images. In particular, the proposed SGISRF focuses on addressing three crucial challenges with three specially designed techniques. First, we devise the Cross-Dimension Guidance Propagation to encode the scarce 2D user clicks into informative 3D guidance representations. Second, the Uncertainty-Eliminated 3D Segmentation module is designed to achieve efficient yet effective 3D segmentation. Third, a Concealment-Revealed Supervised Learning scheme is proposed to reveal and correct the concealed 3D segmentation errors resulting from the supervision in 2D space with only 2D mask annotations. Extensive experiments on two real-world challenging benchmarks covering diverse scenes demonstrate 1) the effectiveness and scene-generalizability of the proposed method, and 2) favorable performance compared to a classical method requiring scene-specific optimization.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
384,683
2410.09735
Flexible Operation of Electricity-HCNG Networks with Variable Hydrogen Fraction: A Distributionally Robust Joint Chance-Constrained Approach
Hydrogen-enriched compressed natural gas (HCNG) is a promising way to utilize surplus renewable energy through hydrogen electrolysis and blending it into natural gas. However, the optimal hydrogen volume fraction (HVF) of HCNG varies following the daily fluctuations of renewable energy. Besides, facing the rapid volatility of renewable energy, ensuring rapid and reliable real-time adjustments is challenging for electricity-HCNG (E-HCNG) coupling networks. To this end, this paper proposes a flexible operation framework for E-HCNG networks against the fluctuations and volatility of renewable energy. Based on operations with variable HVF, the framework develops an E-HCNG system-level affine policy, which allows real-time re-dispatch of operations according to the volatility. Meanwhile, to guarantee the operational reliability of the affine policy, a distributionally robust joint chance constraint (DRJCC) is introduced, which limits the violation probability of operational constraints under the uncertainties of renewable energy volatility. Furthermore, in the solving process, to mitigate the over-conservatism in DRJCC decomposition, an improved risk allocation method is proposed, utilizing the correlations among violations under the affine policy. Moreover, to tackle the non-convexities arising from the variable HVF, customized approximations for HCNG flow formulations are developed. The problem is finally reformulated into a mixed-integer second-order cone programming problem. The effectiveness of the proposed method is validated in both small-scale and large-scale experiments.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
497,746
2309.14088
REPA: Client Clustering without Training and Data Labels for Improved Federated Learning in Non-IID Settings
Clustering clients into groups that exhibit relatively homogeneous data distributions represents one of the major means of improving the performance of federated learning (FL) in non-independent and identically distributed (non-IID) data settings. Yet, the applicability of current state-of-the-art approaches remains limited as these approaches cluster clients based on information, such as the evolution of local model parameters, that is only obtainable through actual on-client training. On the other hand, there is a need to make FL models available to clients who are not able to perform the training themselves, as they do not have the processing capabilities required for training, or simply want to use the model without participating in the training. Furthermore, the existing alternative approaches that avert the training still require that individual clients have a sufficient amount of labeled data upon which the clustering is based, essentially assuming that each client is a data annotator. In this paper, we present REPA, an approach to client clustering in non-IID FL settings that requires neither training nor labeled data collection. REPA uses a novel supervised autoencoder-based method to create embeddings that profile a client's underlying data-generating processes without exposing the data to the server and without requiring local training. Our experimental analysis over three different datasets demonstrates that REPA delivers state-of-the-art model performance while expanding the applicability of cluster-based FL to previously uncovered use cases.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
394,463
1604.07318
Impact of Mobility on the Sum Rate of NB-OFDMA Based Mobile IoT Networks
In future Internet of Things (IoT) networks, the explosive growth of mobile devices compels us to reconsider the effectiveness of current frequency-division multiple access (FDMA) schemes. Devices' differentiated mobility features and diversified scattering environments make it more complicated to characterize the multi-user interference. In this paper, we thoroughly analyze the impacts of device mobility on the inter-sub-carrier interference (ICI) in an IoT system based on the 3GPP narrow-band orthogonal frequency-division multiple access (NB-OFDMA) protocol, and obtain the relationship between the system sum-rate and device mobility. Our results may shed some light on system design under mobile scenarios.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
55,074
2502.09363
The Accuracy Cost of Weakness: A Theoretical Analysis of Fixed-Segment Weak Labeling for Events in Time
Accurate labels are critical for deriving robust machine learning models. Labels are used to train supervised learning models and to evaluate most machine learning paradigms. In this paper, we model the accuracy and cost of a common weak labeling process where annotators assign presence or absence labels to fixed-length data segments for a given event class. The annotator labels a segment as "present" if it sufficiently covers an event from that class, e.g., a birdsong sound event in audio data. We analyze how the segment length affects the label accuracy and the required number of annotations, and compare this fixed-length labeling approach with an oracle method that uses the true event activations to construct the segments. Furthermore, we quantify the gap between these methods and verify that in most realistic scenarios the oracle method is better than the fixed-length labeling method in both accuracy and cost. Our findings provide a theoretical justification for adaptive weak labeling strategies that mimic the oracle process, and a foundation for optimizing weak labeling processes in sequence labeling tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
533,424
1903.03438
Towards a Framework to Manage Perceptual Uncertainty for Safe Automated Driving
Perception is a safety-critical function of autonomous vehicles and machine learning (ML) plays a key role in its implementation. This position paper identifies (1) perceptual uncertainty as a performance measure used to define safety requirements and (2) its influence factors when using supervised ML. This work is a first step towards a framework for measuring and controlling the effects of these factors and supplying evidence to support claims about perceptual uncertainty.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
123,732
1401.6123
Secrecy Transmission Capacity in Noisy Wireless Ad Hoc Networks
This paper considers the transmission of confidential messages over noisy wireless ad hoc networks, where both background noise and interference from concurrent transmitters affect the received signals. For the random networks where the legitimate nodes and the eavesdroppers are distributed as Poisson point processes, we study the secrecy transmission capacity (STC), as well as the connection outage probability and secrecy outage probability, based on the physical layer security. We first consider the basic fixed transmission distance model, and establish a theoretical model of the STC. We then extend the above results to a more realistic random distance transmission model, namely nearest receiver transmission. Finally, extensive simulation and numerical results are provided to validate the efficiency of our theoretical results and illustrate how the STC is affected by noise, connection and secrecy outage probabilities, transmitter and eavesdropper densities, and other system parameters. Remarkably, our results reveal that a proper amount of noise is helpful to the secrecy transmission capacity.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
30,302
2305.13752
Pulling Target to Source: A New Perspective on Domain Adaptive Semantic Segmentation
Domain adaptive semantic segmentation aims to transfer knowledge from a labeled source domain to an unlabeled target domain. However, existing methods primarily focus on directly learning qualified target features, making it challenging to guarantee their discrimination in the absence of target labels. This work provides a new perspective. We observe that the features learned with source data manage to keep categorically discriminative during training, thereby enabling us to implicitly learn adequate target representations by simply \textbf{pulling target features close to source features for each category}. To this end, we propose T2S-DA, which we interpret as a form of pulling Target to Source for Domain Adaptation, encouraging the model to learn similar cross-domain features. Also, considering that pixel categories are heavily imbalanced in segmentation datasets, we come up with a dynamic re-weighting strategy to help the model concentrate on those underperforming classes. Extensive experiments confirm that T2S-DA learns a more discriminative and generalizable representation, significantly surpassing the state-of-the-art. We further show that our method is well suited to the domain generalization task, verifying its domain-invariant property.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
366,665
2202.08898
Word Embeddings for Automatic Equalization in Audio Mixing
In recent years, machine learning has been widely adopted to automate the audio mixing process. Automatic mixing systems have been applied to various audio effects such as gain-adjustment, equalization, and reverberation. These systems can be controlled through visual interfaces, providing audio examples, using knobs, and semantic descriptors. Using semantic descriptors or textual information to control these systems is an effective way for artists to communicate their creative goals. In this paper, we explore the novel idea of using word embeddings to represent semantic descriptors. Word embeddings are generally obtained by training neural networks on large corpora of written text. These embeddings serve as the input layer of the neural network to create a translation from words to EQ settings. Using this technique, the machine learning model can also generate EQ settings for semantic descriptors that it has not seen before. We compare the EQ settings of humans with the predictions of the neural network to evaluate the quality of predictions. The results showed that the embedding layer enables the neural network to understand semantic descriptors. We observed that the models with embedding layers perform better than those without embedding layers, but still not as good as human labels.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
281,016
2210.17237
Latent Multimodal Functional Graphical Model Estimation
Joint multimodal functional data acquisition, where functional data from multiple modes are measured simultaneously from the same subject, has emerged as an exciting modern approach enabled by recent engineering breakthroughs in the neurological and biological sciences. One prominent motivation to acquire such data is to enable new discoveries of the underlying connectivity by combining multimodal signals. Despite the scientific interest, there remains a gap in principled statistical methods for estimating the graph underlying multimodal functional data. To this end, we propose a new integrative framework that models the data generation process and identifies operators mapping from the observation space to the latent space. We then develop an estimator that simultaneously estimates the transformation operators and the latent graph. This estimator is based on the partial correlation operator, which we rigorously extend from the multivariate to the functional setting. Our procedure is provably efficient, with the estimator converging to a stationary point with quantifiable statistical error. Furthermore, we show recovery of the latent graph under mild conditions. Our work is applied to analyze simultaneously acquired multimodal brain imaging data where the graph indicates functional connectivity of the brain. We present simulation and empirical results that support the benefits of joint estimation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
327,616
2006.00700
When Machine Learning Meets Multiscale Modeling in Chemical Reactions
Due to the intrinsic complexity and nonlinearity of chemical reactions, direct applications of traditional machine learning algorithms may face many difficulties. In this study, through two concrete examples with biological background, we illustrate how the key ideas of multiscale modeling can help to significantly reduce the computational cost of machine learning, as well as how machine learning algorithms perform model reduction automatically in a time-scale separated system. Our study highlights the necessity and effectiveness of integrating machine learning algorithms and multiscale modeling in the study of chemical reactions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
179,548
1701.01911
Random Sampling for Fast Face Sketch Synthesis
Exemplar-based face sketch synthesis plays an important role in both digital entertainment and law enforcement. It generally consists of two parts: neighbor selection and reconstruction weight representation. The main computational complexity of exemplar-based face sketch synthesis methods lies in the neighbor selection process. State-of-the-art face sketch synthesis methods perform neighbor selection online in a data-driven manner by $K$ nearest neighbor ($K$-NN) searching. However, the online search increases the synthesis time. Moreover, since these methods need to traverse the whole training dataset for neighbor selection, the computational complexity increases with the scale of the training database and hence these methods have limited scalability. In this paper, we propose a simple but effective offline random sampling in place of online $K$-NN search to improve the synthesis efficiency. Extensive experiments on public face sketch databases demonstrate the superiority of the proposed method in comparison to state-of-the-art methods, in terms of both synthesis quality and time consumption. The proposed method could be extended to other heterogeneous face image transformation problems such as face hallucination. We release the source codes of our proposed methods and the evaluation metrics for future study online: http://www.ihitworld.com/RSLCR.html.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
66,472
1606.06871
A Comprehensive Study of Deep Bidirectional LSTM RNNs for Acoustic Modeling in Speech Recognition
We present a comprehensive study of deep bidirectional long short-term memory (LSTM) recurrent neural network (RNN) based acoustic models for automatic speech recognition (ASR). We study the effect of size and depth and train models of up to 8 layers. We investigate the training aspect and study different variants of optimization methods, batching, truncated backpropagation, different regularization techniques such as dropout and $L_2$ regularization, and different gradient clipping variants. The major part of the experimental analysis was performed on the Quaero corpus. Additional experiments also were performed on the Switchboard corpus. Our best LSTM model has a relative improvement in word error rate of over 14\% compared to our best feed-forward neural network (FFNN) baseline on the Quaero task. On this task, we get our best result with an 8 layer bidirectional LSTM and we show that a pretraining scheme with layer-wise construction helps for deep LSTMs. Finally we compare the training calculation time of many of the presented experiments in relation with recognition performance. All the experiments were done with RETURNN, the RWTH extensible training framework for universal recurrent neural networks in combination with RASR, the RWTH ASR toolkit.
false
false
true
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
57,624
2309.12913
A matter of attitude: Focusing on positive and active gradients to boost saliency maps
Saliency maps have become one of the most widely used interpretability techniques for convolutional neural networks (CNN) due to their simplicity and the quality of the insights they provide. However, there are still some doubts about whether these insights are a trustworthy representation of what CNNs use to come up with their predictions. This paper explores how rescuing the sign of the gradients from the saliency map can lead to a deeper understanding of multi-class classification problems. Using both pretrained and trained-from-scratch CNNs, we unveil that considering the sign and the effect not only of the correct class, but also the influence of the other classes, allows us to better identify the pixels of the image that the network is really focusing on. Furthermore, how occluding or altering those pixels is expected to affect the outcome also becomes clearer.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
393,975
2406.03470
SpikeZIP-TF: Conversion is All You Need for Transformer-based SNN
Spiking neural network (SNN) has attracted great attention due to its characteristic of high efficiency and accuracy. Currently, ANN-to-SNN conversion methods can obtain SNNs with ANN on-par accuracy and ultra-low latency (8 time-steps) in CNN structures on computer vision (CV) tasks. However, as Transformer-based networks have achieved prevailing precision on both CV and natural language processing (NLP), Transformer-based SNNs still suffer lower accuracy w.r.t. their ANN counterparts. In this work, we introduce a novel ANN-to-SNN conversion method called SpikeZIP-TF, where ANN and SNN are exactly equivalent, thus incurring no accuracy degradation. SpikeZIP-TF achieves 83.82% accuracy on a CV dataset (ImageNet) and 93.79% accuracy on an NLP dataset (SST-2), which are higher than SOTA Transformer-based SNNs. The code is available on GitHub: https://github.com/Intelligent-Computing-Research-Group/SpikeZIP_transformer
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
461,253
2403.01449
DUFOMap: Efficient Dynamic Awareness Mapping
The dynamic nature of the real world is one of the main challenges in robotics. The first step in dealing with it is to detect which parts of the world are dynamic. A typical benchmark task is to create a map that contains only the static part of the world to support, for example, localization and planning. Current solutions are often applied in post-processing, where parameter tuning allows the user to adjust the setting for a specific dataset. In this paper, we propose DUFOMap, a novel dynamic awareness mapping framework designed for efficient online processing. Despite having the same parameter settings for all scenarios, it performs better or is on par with state-of-the-art methods. Ray casting is utilized to identify and classify fully observed empty regions. Since these regions have been observed empty, it follows that anything inside them at another time must be dynamic. Evaluation is carried out in various scenarios, including outdoor environments in KITTI and Argoverse 2, open areas on the KTH campus, and with different sensor types. DUFOMap outperforms the state of the art in terms of accuracy and computational efficiency. The source code, benchmarks, and links to the datasets utilized are provided. See https://kth-rpl.github.io/dufomap for more details.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
434,422
2401.10786
Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion
Directly generating scenes from satellite imagery offers exciting possibilities for integration into applications like games and map services. However, challenges arise from significant view changes and scene scale. Previous efforts mainly focused on image or video generation, lacking exploration into the adaptability of scene generation for arbitrary views. Existing 3D generation works either operate at the object level or struggle to utilize the geometry obtained from satellite imagery. To overcome these limitations, we propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques. Specifically, our approach first generates texture colors at the point level for a given geometry using a 3D diffusion model, which is then transformed into a scene representation in a feed-forward manner. The representation can be utilized to render arbitrary views that excel in both single-frame quality and inter-frame consistency. Experiments on two city-scale datasets show that our model demonstrates proficiency in generating photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
422,773
2011.06283
Fed-Focal Loss for imbalanced data classification in Federated Learning
The Federated Learning setting has a central server coordinating the training of a model on a network of devices. One of the challenges is variable training performance when the dataset has a class imbalance. In this paper, we address this by introducing a new loss function called Fed-Focal Loss. We propose to address the class imbalance by reshaping cross-entropy loss such that it down-weights the loss assigned to well-classified examples along the lines of focal loss. Additionally, by leveraging a tunable sampling framework, we take into account selective client model contributions on the central server to further focus the detector during training and hence improve its robustness. Using a detailed experimental analysis with the VIRTUAL (Variational Federated Multi-Task Learning) approach, we demonstrate consistently superior performance in both the balanced and unbalanced scenarios for MNIST, FEMNIST, VSN and HAR benchmarks. We obtain a more than 9% (absolute percentage) improvement in the unbalanced MNIST benchmark. We further show that our technique can be adopted across multiple Federated Learning algorithms to get improvements.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
206,198
2005.13877
The optimal sequence for reset controllers
PID controllers cannot satisfy high-performance requirements since they are restricted by the water-bed effect. Thus, the need for a better alternative to linear PID controllers increases due to the rising demands of the high-tech industry. This has led many researchers to explore nonlinear controllers like reset control. Although reset controllers have been widely used in the literature to overcome the limitations of linear controllers, the performance of the system varies depending on the relative sequence of the controller's linear and nonlinear parts. In this paper, the optimal sequence is found using high-order sinusoidal input describing functions (HOSIDF). By arranging the controller parts according to this strategy, better performance in the sense of precision and control input is achieved. The performance of the proposed sequence is validated on a precision positioning setup. The experimental results demonstrate that the optimal sequence found in theory outperforms other sequences.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
179,126
1604.05129
Memory shapes time perception and intertemporal choices
There is a consensus that human and non-human subjects experience temporal distortions in many stages of their perceptual and decision-making systems. Similarly, intertemporal choice research has shown that decision-makers undervalue future outcomes relative to immediate ones. Here we combine techniques from information theory and artificial intelligence to show how both temporal distortions and intertemporal choice preferences can be explained as a consequence of the coding efficiency of sensorimotor representation. In particular, the model implies that interactions that constrain future behavior are perceived as being both longer in duration and more valuable. Furthermore, using simulations of artificial agents, we investigate how memory constraints enforce a renormalization of the perceived timescales. Our results show that qualitatively different discount functions, such as exponential and hyperbolic discounting, arise as a consequence of an agent's probabilistic model of the world.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
54,768
1512.04038
An Uncertainty-Aware Approach for Exploratory Microblog Retrieval
Although there has been a great deal of interest in analyzing customer opinions and breaking news in microblogs, progress has been hampered by the lack of an effective mechanism to discover and retrieve data of interest from microblogs. To address this problem, we have developed an uncertainty-aware visual analytics approach to retrieve salient posts, users, and hashtags. We extend an existing ranking technique to compute a multifaceted retrieval result: the mutual reinforcement rank of a graph node, the uncertainty of each rank, and the propagation of uncertainty among different graph nodes. To illustrate the three facets, we have also designed a composite visualization with three visual components: a graph visualization, an uncertainty glyph, and a flow map. The graph visualization with glyphs, the flow map, and the uncertainty analysis together enable analysts to effectively find the most uncertain results and interactively refine them. We have applied our approach to several Twitter datasets. Qualitative evaluation and two real-world case studies demonstrate the promise of our approach for retrieving high-quality microblog data.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
50,090
1812.01285
Rare Event Detection using Disentangled Representation Learning
This paper presents a novel method for rare event detection from an image pair with class-imbalanced datasets. A straightforward approach for event detection tasks is to train a detection network from a large-scale dataset in an end-to-end manner. However, in many applications such as building change detection on satellite images, few positive samples are available for training. Moreover, scene image pairs contain many trivial events, such as illumination changes or background motion. These many trivial events and the class imbalance problem lead to false alarms in rare event detection. In order to overcome these difficulties, we propose a novel method to learn disentangled representations from only low-cost negative samples. The proposed method disentangles different aspects in a pair of observations: variant and invariant factors that represent trivial events and image contents, respectively. The effectiveness of the proposed approach is verified by quantitative evaluations on four change detection datasets, and the qualitative analysis shows that the proposed method can acquire representations that disentangle rare events from trivial ones.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
115,493
2408.16862
Probabilistic Decomposed Linear Dynamical Systems for Robust Discovery of Latent Neural Dynamics
Time-varying linear state-space models are powerful tools for obtaining mathematically interpretable representations of neural signals. For example, switching and decomposed models describe complex systems using latent variables that evolve according to simple locally linear dynamics. However, existing methods for latent variable estimation are not robust to dynamical noise and system nonlinearity due to noise-sensitive inference procedures and limited model formulations. This can lead to inconsistent results on signals with similar dynamics, limiting the model's ability to provide scientific insight. In this work, we address these limitations and propose a probabilistic approach to latent variable estimation in decomposed models that improves robustness against dynamical noise. Additionally, we introduce an extended latent dynamics model to improve robustness against system nonlinearities. We evaluate our approach on several synthetic dynamical systems, including an empirically-derived brain-computer interface experiment, and demonstrate more accurate latent variable inference in nonlinear systems with diverse noise conditions. Furthermore, we apply our method to a real-world clinical neurophysiology dataset, illustrating the ability to identify interpretable and coherent structure where previous models cannot.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
484,475
2403.01748
NeuSpeech: Decode Neural signal as Speech
Decoding language from brain dynamics is an important open direction in the realm of brain-computer interface (BCI), especially considering the rapid growth of large language models. Compared to invasive signals, which require electrode implantation surgery, non-invasive neural signals (e.g. EEG, MEG) have attracted increasing attention considering their safety and generality. However, the exploration is not adequate in three aspects: 1) previous methods mainly focus on EEG, and none of the previous works address this problem on MEG, which has better signal quality; 2) prior works have predominantly used "teacher-forcing" during generative decoding, which is impractical; 3) prior works are mostly "BART-based" rather than fully auto-regressive, which performs better in other sequence tasks. In this paper, we explore the brain-to-text translation of MEG signals in a speech-decoding formation. Here we are the first to investigate a cross-attention-based "whisper" model for generating text directly from MEG signals without teacher forcing. Our model achieves impressive BLEU-1 scores of 60.30 and 52.89 without pretraining and teacher-forcing on two major datasets (\textit{GWilliams} and \textit{Schoffelen}). This paper conducts a comprehensive review to understand how the speech decoding formation performs on neural decoding tasks, including pretraining initialization, training and evaluation set splitting, augmentation, and scaling law. Code is available at https://github.com/NeuSpeech/NeuSpeech1.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
434,555
2501.15235
Large-Scale Riemannian Meta-Optimization via Subspace Adaptation
Riemannian meta-optimization provides a promising approach to solving non-linear constrained optimization problems, which trains neural networks as optimizers to perform optimization on Riemannian manifolds. However, existing Riemannian meta-optimization methods take up huge memory footprints in large-scale optimization settings, as the learned optimizer can only adapt gradients of a fixed size and thus cannot be shared across different Riemannian parameters. In this paper, we propose an efficient Riemannian meta-optimization method that significantly reduces the memory burden for large-scale optimization via a subspace adaptation scheme. Our method trains neural networks to individually adapt the row and column subspaces of Riemannian gradients, instead of directly adapting the full gradient matrices in existing Riemannian meta-optimization methods. In this case, our learned optimizer can be shared across Riemannian parameters with different sizes. Our method reduces the model memory consumption by six orders of magnitude when optimizing an orthogonal mainstream deep neural network (e.g., ResNet50). Experiments on multiple Riemannian tasks show that our method can not only reduce the memory consumption but also improve the performance of Riemannian meta-optimization.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
527,460
2307.12138
SCPAT-GAN: Structural Constrained and Pathology Aware Convolutional Transformer-GAN for Virtual Histology Staining of Human Coronary OCT images
There is a significant need for the generation of virtual histological information from coronary optical coherence tomography (OCT) images to better guide the treatment of coronary artery disease. However, existing methods either require a large pixel-wisely paired training dataset or have limited capability to map pathological regions. To address these issues, we propose a structurally constrained, pathology-aware, transformer generative adversarial network, namely SCPAT-GAN, to generate virtually stained H&E histology from OCT images. The proposed SCPAT-GAN advances existing methods via a novel design to impose pathological guidance on structural layers using a transformer-based network.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
381,151
2501.13576
Federated Conformance Checking
Conformance checking is a crucial aspect of process mining, where the main objective is to compare the actual execution of a process, as recorded in an event log, with a reference process model, e.g., in the form of a Petri net or a BPMN. Conformance checking enables identifying deviations, anomalies, or non-compliance instances. It offers different perspectives on problems in processes, bottlenecks, or process instances that are not compliant with the model. Performing conformance checking in federated (inter-organizational) settings allows organizations to gain insights into the overall process execution and to identify compliance issues across organizational boundaries, which facilitates process improvement efforts among collaborating entities. In this paper, we propose a privacy-aware federated conformance-checking approach that allows for evaluating the correctness of overall cross-organizational process models, identifying miscommunications, and quantifying their costs. For evaluation, we design and simulate a supply chain process with three organizations engaged in purchase-to-pay, order-to-cash, and shipment processes. We generate synthetic event logs for each organization as well as the complete process, and we apply our approach to identify and evaluate the cost of pre-injected miscommunications.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
526,742
1008.1140
On Two Strong Converse Theorems for Stationary Discrete Memoryless Channels
In 1973, Arimoto proved the strong converse theorem for the discrete memoryless channels stating that when transmission rate $R$ is above channel capacity $C$, the error probability of decoding goes to one as the block length $n$ of code word tends to infinity. He proved the theorem by deriving the exponent function of error probability of correct decoding that is positive if and only if $R>C$. Subsequently, in 1979, Dueck and K\"orner determined the optimal exponent of correct decoding. Arimoto's bound has been said to be equal to the bound of Dueck and K\"orner. However its rigorous proof has not been presented so far. In this paper we give a rigorous proof of the equivalence of Arimoto's bound to that of Dueck and K\"orner.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
7,207
2412.10489
CognitionCapturer: Decoding Visual Stimuli From Human EEG Signal With Multimodal Information
Electroencephalogram (EEG) signals have attracted significant attention from researchers due to their non-invasive nature and high temporal sensitivity in decoding visual stimuli. However, most recent studies have focused solely on the relationship between EEG and image data pairs, neglecting the valuable ``beyond-image-modality" information embedded in EEG signals. This results in the loss of critical multimodal information in EEG. To address this limitation, we propose CognitionCapturer, a unified framework that fully leverages multimodal data to represent EEG signals. Specifically, CognitionCapturer trains Modality Expert Encoders for each modality to extract cross-modal information from the EEG modality. It then introduces a diffusion prior to map the EEG embedding space to the CLIP embedding space; by subsequently using a pretrained generative model, the proposed framework can reconstruct visual stimuli with high semantic and structural fidelity. Notably, the framework does not require any fine-tuning of the generative models and can be extended to incorporate more modalities. Through extensive experiments, we demonstrate that CognitionCapturer outperforms state-of-the-art methods both qualitatively and quantitatively. Code: https://github.com/XiaoZhangYES/CognitionCapturer.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
516,962
2304.07838
Discrete-Time State-Feedback Controller with Canonical Form on Inverted Pendulum (on a cart)
The inverted pendulum has been widely studied as a notable research topic with respect to balance control. The concept of this pendulum is similar to missile guidance, in that the center of drag is ahead of the center of gravity. A mathematical model of the inverted pendulum on a cart is also presented in this paper. Various relevant parameters, from the displacement of the pivot and the angular rotation to the external force exerted on the carriage, are considered so as to obtain the equilibrium points and the linearized systems. Due to the severe risk of instability, a reliable closed-loop state-feedback controller is designed to stabilize the pendulum in the upright position, even under large deviations. The specific concept proposed is to apply the canonical form when computing the determinant of the gain $K$, leading to $K_d$. The results show that the constructed design can maintain the stability of the system when applying three sorts of initial conditions and choosing a sampling time $T$ under $0.2$, with only a small possible degradation in performance.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
358,490
1703.04006
Waveform Optimization for Radio-Frequency Wireless Power Transfer
In this paper, we study the waveform design problem for a single-input single-output (SISO) radio-frequency (RF) wireless power transfer (WPT) system in frequency-selective channels. First, based on the actual non-linear current-voltage model of the diode at the energy receiver, we derive a semi-closed-form expression for the deliverable DC voltage in terms of the incident RF signal and hence obtain the average harvested power. Next, by adopting a multisine waveform structure for the transmit signal of the energy transmitter, we jointly design the multisine signal amplitudes and phases over all frequency tones according to the channel state information (CSI) to maximize the deliverable DC voltage or harvested power. Although our formulated problem is non-convex and difficult to solve, we propose two suboptimal solutions to it, based on the frequency-domain maximal ratio transmission (MRT) principle and the sequential convex programming (SCP) technique, respectively. Using various simulations, the performance gain of our solutions over the existing waveform designs is shown.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
69,815