Dataset schema (one record per arXiv paper):

  id                  string, length 9-16
  title               string, length 4-278
  abstract            string, length 3-4.08k
  category flags      bool (2 classes each), one column per subject class:
                      cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO,
                      cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE,
                      cs.DB, Other
  __index_level_0__   int64, range 0-541k
2010.12948
DeepAtrophy: Teaching a Neural Network to Differentiate Progressive Changes from Noise on Longitudinal MRI in Alzheimer's Disease
Volume change measures derived from longitudinal MRI (e.g. hippocampal atrophy) are a well-studied biomarker of disease progression in Alzheimer's Disease (AD) and are used in clinical trials to track the therapeutic efficacy of disease-modifying treatments. However, longitudinal MRI change measures can be confounded by non-biological factors, such as different degrees of head motion and susceptibility artifact between pairs of MRI scans. We hypothesize that deep learning methods applied directly to pairs of longitudinal MRI scans can be trained to differentiate between biological changes and non-biological factors better than conventional approaches based on deformable image registration. To achieve this, we make a simplifying assumption that biological factors are associated with time (i.e. the hippocampus shrinks over time in the aging population) whereas non-biological factors are independent of time. We then formulate deep learning networks to infer the temporal order of same-subject MRI scans input to the network in arbitrary order, as well as to infer ratios between interscan intervals for two pairs of same-subject MRI scans. In the test dataset, these networks perform better in tasks of temporal ordering (89.3%) and interscan interval inference (86.1%) than a state-of-the-art deformation-based morphometry method ALOHA (76.6% and 76.1% respectively) (Das et al., 2012). Furthermore, we derive a disease progression score from the network that is able to detect a group difference between 58 preclinical AD and 75 beta-amyloid-negative cognitively normal individuals within one year, compared to two years for ALOHA. This suggests that deep learning can be trained to differentiate MRI changes due to biological factors (tissue loss) from changes due to non-biological factors, leading to novel biomarkers that are more sensitive to longitudinal changes at the earliest stages of AD.
categories: cs.LG
__index_level_0__: 202,941
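The abstract's simplifying assumption (biological change is correlated with the scan interval while measurement noise is not) can be illustrated with a toy temporal-ordering task. This is not the authors' network: the simulated volumes, noise levels, atrophy rate, and the one-parameter logistic model below are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each sample is a pair of "scans" from one subject, summarised here by a
# single hippocampal volume measurement (a gross simplification).
n = 2000
interval = rng.uniform(0.5, 3.0, n)        # years between scans (assumed)
atrophy_rate = 0.05                        # fractional volume loss per year (assumed)
base = rng.normal(3000.0, 300.0, n)        # baseline volume in mm^3 (assumed)
earlier = base + rng.normal(0, 60.0, n)    # noise is independent of time
later = base * (1 - atrophy_rate * interval) + rng.normal(0, 60.0, n)

# Present each pair in random order; label = 1 if the first input is earlier.
flip = rng.integers(0, 2, n)
first = np.where(flip == 1, earlier, later)
second = np.where(flip == 1, later, earlier)
x = (first - second) / 3000.0              # signed, scaled volume difference
y = flip.astype(float)

# Logistic regression on the difference, trained by gradient descent: it can
# learn the ordering because atrophy (not noise) accumulates with time.
w, b = 0.0, 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((p - y) * x)
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((1 / (1 + np.exp(-(w * x + b)))) > 0.5) == (y == 1))
print(f"temporal-ordering accuracy: {acc:.2f}")
```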
1612.03225
Optimal Generalized Decision Trees via Integer Programming
Decision trees have been a very popular class of predictive models for decades due to their interpretability and good performance on categorical features. However, they are not always robust and tend to overfit the data. Additionally, if allowed to grow large, they lose interpretability. In this paper, we present a mixed integer programming formulation to construct optimal decision trees of a prespecified size. We take the special structure of categorical features into account and allow combinatorial decisions (based on subsets of values of features) at each node. Our approach can also handle numerical features via thresholding. We show that very good accuracy can be achieved with small trees using moderately-sized training sets. The optimization problems we solve are tractable with modern solvers.
categories: cs.LG
__index_level_0__: 65,344
2207.08948
Multi-step domain adaptation by adversarial attack to $\mathcal{H} \Delta \mathcal{H}$-divergence
Adversarial examples are transferable between different models. In our paper, we propose to use this property for multi-step domain adaptation. In unsupervised domain adaptation settings, we demonstrate that replacing the source domain with adversarial examples to $\mathcal{H} \Delta \mathcal{H}$-divergence can improve source classifier accuracy on the target domain. Our method can be connected to most domain adaptation techniques. We conducted a range of experiments and achieved improvement in accuracy on Digits and Office-Home datasets.
categories: cs.LG, cs.CV, cs.CR
__index_level_0__: 308,739
2105.03799
Human Gait State Prediction Using Cellular Automata and Classification Using ELM
In this research article, we report periodic cellular automata rules for predicting different gait states and classify the gait data using an Extreme Learning Machine (ELM). This research is the first attempt to use cellular automata to understand the complexity of bipedal walking. Bipedal walk states are difficult to predict because nonlinearity, varying configurations throughout the gait cycle, and the passive joint located at the unilateral foot-ground contact cause the dynamic descriptions and control laws to vary from phase to phase. We design cellular automata rules that predict the next gait state of bipedal steps based on the previous two neighbouring states, and we design such rules for the normal walk. The state prediction will help to correctly design the bipedal walk. The normal walk comprises a total of 8 states, and we consider the current and previous states to predict the next state; accordingly, we formulate 16 rules using cellular automata, 8 rules for each leg. The priority order is maintained using the fact that if the right leg is in the swing phase, the left leg will be in the stance phase. To validate the model, we classify the gait data using ELM [1] and achieve an accuracy of 60%. We explore the resulting trajectories and compare them with other gait trajectories. Finally, we present an error analysis for different joints.
categories: cs.AI, cs.RO
__index_level_0__: 234,266
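The abstract does not list the 16 rules themselves, so the following sketch assumes a simple 8-state gait cycle in which the (previous, current) state pair determines the next state, 8 rules per leg, with the swing/stance opposition between legs encoded as a half-cycle phase offset:

```python
GAIT_STATES = 8

# (previous, current) -> next: 8 rules per leg, 16 in total, as in the
# abstract. The cyclic rule table here is an assumed stand-in for the
# unpublished rules.
rules = {((s - 1) % GAIT_STATES, s): (s + 1) % GAIT_STATES
         for s in range(GAIT_STATES)}

def next_state(prev, curr):
    return rules[(prev, curr)]

def left_from_right(right_state):
    # When the right leg is in swing the left is in stance: model the left
    # leg as half a cycle out of phase with the right.
    return (right_state + GAIT_STATES // 2) % GAIT_STATES

# Walk the right leg through one full cycle starting from states (0, 1).
prev, curr = 0, 1
trajectory = [prev, curr]
for _ in range(7):
    prev, curr = curr, next_state(prev, curr)
    trajectory.append(curr)
print(trajectory)          # [0, 1, 2, 3, 4, 5, 6, 7, 0]
print(left_from_right(2))  # 6
```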
2406.18999
Improving Taxonomic Image-based Out-of-distribution Detection With DNA Barcodes
Image-based species identification could help scaling biodiversity monitoring to a global scale. Many challenges still need to be solved in order to implement these systems in real-world applications. A reliable image-based monitoring system must detect out-of-distribution (OOD) classes it has not been presented before. This is challenging especially with fine-grained classes. Emerging environmental monitoring techniques, DNA metabarcoding and eDNA, can help by providing information on OOD classes that are present in a sample. In this paper, we study if DNA barcodes can also support in finding the outlier images based on the outlier DNA sequence's similarity to the seen classes. We propose a re-ordering approach that can be easily applied on any pre-trained models and existing OOD detection methods. We experimentally show that the proposed approach improves taxonomic OOD detection compared to all common baselines. We also show that the method works thanks to a correlation between visual similarity and DNA barcode proximity. The code and data are available at https://github.com/mikkoim/dnaimg-ood.
categories: cs.CV
__index_level_0__: 468,264
2105.08324
Learning-Based Robust Resource allocation for D2D Underlaying Cellular Network
In this paper, we study the resource allocation in D2D underlaying cellular network with uncertain channel state information (CSI). For satisfying the diversity requirements of different users, i.e. the minimum rate requirement for cellular user and the reliability requirement for D2D user, we attempt to maximize the cellular user's throughput whilst ensuring a chance constraint for D2D user. Then, a robust resource allocation framework is proposed for solving the highly intractable chance constraint about D2D reliability requirement, where the CSI uncertainties are represented as a deterministic set and the reliability requirement is enforced to hold for any uncertain CSI within it. Then, a symmetrical-geometry-based learning approach is developed to model the uncertain CSI into polytope, ellipsoidal and box. After that, we derive the robust counterpart of the chance constraint under these uncertainty sets as the computation convenient convex sets. To overcome the conservatism of the symmetrical-geometry-based uncertainty sets, we develop a support vector clustering (SVC)-based approach to model uncertain CSI as a compact convex uncertainty set. Based on that, the chance constraint of D2D is converted into a linear convex set. Then, we develop a bisection search-based power allocation algorithm for solving the resource allocation in D2D underlaying cellular network with different robust counterparts. Finally, we conduct the simulation to compare the proposed robust optimization approaches with the non-robust one.
categories: cs.IT
__index_level_0__: 235,730
1503.00900
Normalization based K means Clustering Algorithm
K-means is an effective clustering technique used to separate similar data into groups based on initial cluster centroids. In this paper, a Normalization-based K-means clustering algorithm (N-K means) is proposed. The proposed N-K means algorithm applies normalization to the available data prior to clustering, and calculates initial centroids based on weights. Experimental results demonstrate the improvement of the proposed N-K means clustering algorithm over the existing K-means clustering algorithm in terms of complexity and overall performance.
categories: cs.LG, cs.DB
__index_level_0__: 40,766
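The pipeline the abstract describes (normalize first, then cluster) can be sketched as follows. The min-max normalization and the deterministic initialization below are assumptions, since the abstract specifies neither the normalization scheme nor how the weight-based initial centroids are computed.

```python
import numpy as np

def min_max_normalize(X):
    """Scale each feature to [0, 1] so no single feature dominates distances."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi - lo == 0, 1, hi - lo)

def kmeans(X, k, init, iters=100):
    """Plain Lloyd's algorithm with explicitly supplied initial centroids."""
    centroids = init.copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Two well-separated blobs whose raw features live on very different scales;
# normalization puts both features on an equal footing before clustering.
rng = np.random.default_rng(1)
a = rng.normal([1.0, 1000.0], [0.1, 10.0], (50, 2))
b = rng.normal([2.0, 1100.0], [0.1, 10.0], (50, 2))
X = np.vstack([a, b])

Xn = min_max_normalize(X)
labels, _ = kmeans(Xn, k=2, init=Xn[[0, -1]])
print(labels[:5], labels[-5:])
```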
2301.09685
Noisy Parallel Data Alignment
An ongoing challenge in current natural language processing is how its major advancements tend to disproportionately favor resource-rich languages, leaving a significant number of under-resourced languages behind. Due to the lack of resources required to train and evaluate models, most modern language technologies are either nonexistent or unreliable to process endangered, local, and non-standardized languages. Optical character recognition (OCR) is often used to convert endangered language documents into machine-readable data. However, such OCR output is typically noisy, and most word alignment models are not built to work under such noisy conditions. In this work, we study the existing word-level alignment models under noisy settings and aim to make them more robust to noisy data. Our noise simulation and structural biasing method, tested on multiple language pairs, manages to reduce the alignment error rate on a state-of-the-art neural-based alignment model up to 59.6%.
categories: cs.LG, cs.CL
__index_level_0__: 341,568
1405.6757
Proximal Reinforcement Learning: A New Theory of Sequential Decision Making in Primal-Dual Spaces
In this paper, we set forth a new vision of reinforcement learning developed by us over the past few years, one that yields mathematically rigorous solutions to longstanding important questions that have remained unresolved: (i) how to design reliable, convergent, and robust reinforcement learning algorithms (ii) how to guarantee that reinforcement learning satisfies pre-specified "safety" guarantees, and remains in a stable region of the parameter space (iii) how to design "off-policy" temporal difference learning algorithms in a reliable and stable manner, and finally (iv) how to integrate the study of reinforcement learning into the rich theory of stochastic optimization. In this paper, we provide detailed answers to all these questions using the powerful framework of proximal operators. The key idea that emerges is the use of primal dual spaces connected through the use of a Legendre transform. This allows temporal difference updates to occur in dual spaces, allowing a variety of important technical advantages. The Legendre transform elegantly generalizes past algorithms for solving reinforcement learning problems, such as natural gradient methods, which we show relate closely to the previously unconnected framework of mirror descent methods. Equally importantly, proximal operator theory enables the systematic development of operator splitting methods that show how to safely and reliably decompose complex products of gradients that occur in recent variants of gradient-based temporal difference learning. This key technical innovation makes it possible to finally design "true" stochastic gradient methods for reinforcement learning. Finally, Legendre transforms enable a variety of other benefits, including modeling sparsity and domain geometry. Our work builds extensively on recent work on the convergence of saddle-point algorithms, and on the theory of monotone operators.
categories: cs.LG
__index_level_0__: 33,410
2405.13992
Learning Cut Generating Functions for Integer Programming
The branch-and-cut algorithm is the method of choice to solve large scale integer programming problems in practice. A key ingredient of branch-and-cut is the use of cutting planes which are derived constraints that reduce the search space for an optimal solution. Selecting effective cutting planes to produce small branch-and-cut trees is a critical challenge in the branch-and-cut algorithm. Recent advances have employed a data-driven approach to select optimal cutting planes from a parameterized family, aimed at reducing the branch-and-bound tree size (in expectation) for a given distribution of integer programming instances. We extend this idea to the selection of the best cut generating function (CGF), which is a tool in the integer programming literature for generating a wide variety of cutting planes that generalize the well-known Gomory Mixed-Integer (GMI) cutting planes. We provide rigorous sample complexity bounds for the selection of an effective CGF from certain parameterized families that provably performs well for any specified distribution on the problem instances. Our empirical results show that the selected CGF can outperform the GMI cuts for certain distributions. Additionally, we explore the sample complexity of using neural networks for instance-dependent CGF selection.
categories: cs.LG
__index_level_0__: 456,184
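As background for the GMI baseline mentioned in the abstract: the classical pure-integer Gomory fractional cut (a simpler relative of the GMI cut, which additionally handles continuous variables) keeps only the fractional parts of a tableau row's coefficients. A minimal sketch, not the paper's learned cut generating functions:

```python
from math import floor

def gomory_fractional_cut(coeffs, rhs):
    """Given a simplex tableau row  sum_j coeffs[j]*x_j = rhs  with every x_j
    integer and nonnegative, return (cut_coeffs, cut_rhs) for the valid
    inequality  sum_j frac(coeffs[j])*x_j >= frac(rhs).  Validity: the left
    side is congruent to frac(rhs) mod 1 and nonnegative for any integer
    solution, so it is at least frac(rhs)."""
    frac = lambda t: t - floor(t)
    return [frac(a) for a in coeffs], frac(rhs)

# Toy row x1 + 1.5*x2 + 0.25*x3 = 2.75 yields the cut
# 0.5*x2 + 0.25*x3 >= 0.75, which cuts off the fractional LP point
# while keeping every integer solution feasible.
cut, cut_rhs = gomory_fractional_cut([1.0, 1.5, 0.25], 2.75)
print(cut, cut_rhs)  # [0.0, 0.5, 0.25] 0.75
```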
2207.11335
Generalizing Homophily to Simplicial Complexes
Group interactions occur frequently in social settings, yet their properties beyond pairwise relationships in network models remain unexplored. In this work, we study homophily, the nearly ubiquitous phenomena wherein similar individuals are more likely than random to form connections with one another, and define it on simplicial complexes, a generalization of network models that goes beyond dyadic interactions. While some group homophily definitions have been proposed in the literature, we provide theoretical and empirical evidence that prior definitions mostly inherit properties of homophily in pairwise interactions rather than capture the homophily of group dynamics. Hence, we propose a new measure, $k$-simplicial homophily, which properly identifies homophily in group dynamics. Across 16 empirical networks, $k$-simplicial homophily provides information uncorrelated with homophily measures on pairwise interactions. Moreover, we show the empirical value of $k$-simplicial homophily in identifying when metadata on nodes is useful for predicting group interactions, whereas previous measures are uninformative.
categories: cs.SI
__index_level_0__: 309,593
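For contrast with the proposed $k$-simplicial measure (whose definition is not given in the abstract), the standard dyadic notion of homophily, the fraction of edges joining same-label nodes versus the chance level for random node pairs, can be computed as:

```python
from itertools import combinations

def edge_homophily(edges, label):
    """Fraction of edges whose two endpoints share a label (dyadic homophily)."""
    same = sum(label[u] == label[v] for u, v in edges)
    return same / len(edges)

def expected_homophily(label):
    """Chance that a uniformly random node pair shares a label (baseline)."""
    pairs = list(combinations(label, 2))
    return sum(label[u] == label[v] for u, v in pairs) / len(pairs)

# Toy graph: two triangles of like-labelled nodes joined by one cross edge.
label = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
print(edge_homophily(edges, label))   # 6/7: observed same-label edge fraction
print(expected_homophily(label))      # 6/15: random-pair baseline
```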
2103.02269
Lex2vec: making Explainable Word Embeddings via Lexical Resources
In this technical report, we propose an algorithm, called Lex2vec that exploits lexical resources to inject information into word embeddings and name the embedding dimensions by means of knowledge bases. We evaluate the optimal parameters to extract a number of informative labels that is readable and has a good coverage for the embedding dimensions.
categories: cs.CL
__index_level_0__: 222,909
2401.06568
Lost in the Source Language: How Large Language Models Evaluate the Quality of Machine Translation
This study investigates how Large Language Models (LLMs) leverage source and reference data in machine translation evaluation task, aiming to better understand the mechanisms behind their remarkable performance in this task. We design the controlled experiments across various input modes and model types, and employ both coarse-grained and fine-grained prompts to discern the utility of source versus reference information. We find that reference information significantly enhances the evaluation accuracy, while surprisingly, source information sometimes is counterproductive, indicating LLMs' inability to fully leverage the cross-lingual capability when evaluating translations. Further analysis of the fine-grained evaluation and fine-tuning experiments show similar results. These findings also suggest a potential research direction for LLMs that fully exploits the cross-lingual capability of LLMs to achieve better performance in machine translation evaluation tasks.
categories: cs.AI, cs.CL
__index_level_0__: 421,205
2208.01230
A Multifaceted Benchmarking of Synthetic Electronic Health Record Generation Models
Synthetic health data have the potential to mitigate privacy concerns when sharing data to support biomedical research and the development of innovative healthcare applications. Modern approaches for data generation based on machine learning, generative adversarial networks (GAN) methods in particular, continue to evolve and demonstrate remarkable potential. Yet there is a lack of a systematic assessment framework to benchmark methods as they emerge and determine which methods are most appropriate for which use cases. In this work, we introduce a generalizable benchmarking framework to appraise key characteristics of synthetic health data with respect to utility and privacy metrics. We apply the framework to evaluate synthetic data generation methods for electronic health records (EHRs) data from two large academic medical centers with respect to several use cases. The results illustrate that there is a utility-privacy tradeoff for sharing synthetic EHR data. The results further indicate that no method is unequivocally the best on all criteria in each use case, which makes it evident why synthetic data generation methods need to be assessed in context.
categories: cs.AI, cs.LG, cs.CY
__index_level_0__: 311,110
2012.08548
Equalization Loss v2: A New Gradient Balance Approach for Long-tailed Object Detection
Recently proposed decoupled training methods emerge as a dominant paradigm for long-tailed object detection. But they require an extra fine-tuning stage, and the disjointed optimization of representation and classifier might lead to suboptimal results. However, end-to-end training methods, like equalization loss (EQL), still perform worse than decoupled training methods. In this paper, we reveal the main issue in long-tailed object detection is the imbalanced gradients between positives and negatives, and find that EQL does not solve it well. To address the problem of imbalanced gradients, we introduce a new version of equalization loss, called equalization loss v2 (EQL v2), a novel gradient guided reweighing mechanism that re-balances the training process for each category independently and equally. Extensive experiments are performed on the challenging LVIS benchmark. EQL v2 outperforms origin EQL by about 4 points overall AP with 14-18 points improvements on the rare categories. More importantly, it also surpasses decoupled training methods. Without further tuning for the Open Images dataset, EQL v2 improves EQL by 7.3 points AP, showing strong generalization ability. Codes have been released at https://github.com/tztztztztz/eqlv2
categories: cs.LG, cs.CV
__index_level_0__: 211,789
1901.05704
Evolving embodied intelligence from materials to machines
Natural lifeforms specialise to their environmental niches across many levels; from low-level features such as DNA and proteins, through to higher-level artefacts including eyes, limbs, and overarching body plans. We propose Multi-Level Evolution (MLE), a bottom-up automatic process that designs robots across multiple levels and niches them to tasks and environmental conditions. MLE concurrently explores constituent molecular and material 'building blocks', as well as their possible assemblies into specialised morphological and sensorimotor configurations. MLE provides a route to fully harness a recent explosion in available candidate materials and ongoing advances in rapid manufacturing processes. We outline a feasible MLE architecture that realises this vision, highlight the main roadblocks and how they may be overcome, and show robotic applications to which MLE is particularly suited. By forming a research agenda to stimulate discussion between researchers in related fields, we hope to inspire the pursuit of multi-level robotic design all the way from material to machine.
categories: cs.LG, cs.RO, cs.NE
__index_level_0__: 118,839
2410.24012
Diffusion Twigs with Loop Guidance for Conditional Graph Generation
We introduce a novel score-based diffusion framework named Twigs that incorporates multiple co-evolving flows for enriching conditional generation tasks. Specifically, a central or trunk diffusion process is associated with a primary variable (e.g., graph structure), and additional offshoot or stem processes are dedicated to dependent variables (e.g., graph properties or labels). A new strategy, which we call loop guidance, effectively orchestrates the flow of information between the trunk and the stem processes during sampling. This approach allows us to uncover intricate interactions and dependencies, and unlock new generative capabilities. We provide extensive experiments to demonstrate strong performance gains of the proposed method over contemporary baselines in the context of conditional graph generation, underscoring the potential of Twigs in challenging generative tasks such as inverse molecular design and molecular optimization.
categories: cs.LG
__index_level_0__: 504,287
2010.00488
Developing Effective Community Network Analysis Tools According to Visualization Psychology
Visualization is a useful technology in health science, and especially for community network analysis. Because visualization applications in healthcare are typically risk-averse, health psychologists can play a significant role in ensuring appropriate and effective uses of visualization techniques in healthcare. In this paper, we examine the role of health psychologists in the triangle of "health science", "visualization technology", and "visualization psychology". We conclude that health psychologists can use visualization to aid data intelligence workflows in healthcare and health psychology, while researching into visualization psychology to aid the improvement and optimization of data visualization processes.
categories: cs.HC, cs.SI
__index_level_0__: 198,301
2110.14766
Fine Grained Human Evaluation for English-to-Chinese Machine Translation: A Case Study on Scientific Text
Recent research suggests that neural machine translation (MT) in the news domain has reached human-level performance, but for other professional domains it remains far below that level. In this paper, we conduct a fine-grained, systematic human evaluation of four widely used Chinese-English NMT systems on scientific abstracts collected from published journals and books. Our human evaluation results show that all the systems return error rates above 10\% on average, which requires substantial post-editing effort for real academic use. Furthermore, we categorize six main error types and provide some real examples. Our findings emphasise that research attention in the MT community should shift from short-text generic translation to professional machine translation, and that large-scale bilingual corpora should be built for these specific domains.
categories: cs.CL
__index_level_0__: 263,626
2409.01069
A blueprint for large-scale quantum-network deployments
Quantum Communications is a field that promises advances in cryptography, quantum computing and clock synchronisation, among other potential applications. However, communication based on quantum phenomena requires an extreme level of isolation from external disturbances, making the transmission of quantum signals together with classical ones difficult. A range of techniques has been tested to introduce quantum communications in already deployed optical networks which also carry legacy traffic. This comes with challenges, not only at the physical layer but also at the operations and management layer. To achieve a broad acceptance among network operators, the joint management and operation of quantum and classical resources, compliance with standards, and quality and legal assurance need to be addressed. This article presents a detailed account of solutions to the above issues, deployed and evaluated in the MadQCI (Madrid Quantum Communication Infrastructure) testbed. This network is designed to integrate quantum communications in the telecommunications ecosystem by installing quantum-key-distribution modules from multiple providers in production nodes of two different operators. The modules were connected through an optical-switched network with more than 130 km of deployed optical fibre. The tests were done in compliance with strict service level agreements that protected the legacy traffic of the pre-existing classical network. The goal was to achieve full quantum-classical compatibility at all levels, while limiting the modifications of optical transport and encryption and complying with as many standards as possible. This effort was intended to serve as a blueprint, which can be used as the foundation of large-scale quantum network deployments. To demonstrate the capabilities of MadQCI, end-to-end encryption services were deployed and a variety of use-cases were showcased.
categories: cs.SY
__index_level_0__: 485,194
2409.08732
Bridging Dynamic Factor Models and Neural Controlled Differential Equations for Nowcasting GDP
Gross domestic product (GDP) nowcasting is crucial for policy-making as GDP growth is a key indicator of economic conditions. Dynamic factor models (DFMs) have been widely adopted by government agencies for GDP nowcasting due to their ability to handle irregular or missing macroeconomic indicators and their interpretability. However, DFMs face two main challenges: i) the lack of capturing economic uncertainties such as sudden recessions or booms, and ii) the limitation of capturing irregular dynamics from mixed-frequency data. To address these challenges, we introduce NCDENow, a novel GDP nowcasting framework that integrates neural controlled differential equations (NCDEs) with DFMs. This integration effectively handles the dynamics of irregular time series. NCDENow consists of 3 main modules: i) factor extraction leveraging DFM, ii) dynamic modeling using NCDE, and iii) GDP growth prediction through regression. We evaluate NCDENow against 6 baselines on 2 real-world GDP datasets from South Korea and the United Kingdom, demonstrating its enhanced predictive capability. Our empirical results favor our method, highlighting the significant potential of integrating NCDE into nowcasting models. Our code and dataset are available at https://github.com/sklim84/NCDENow_CIKM2024.
categories: cs.AI, cs.LG
__index_level_0__: 488,037
1911.13268
Adversarially Robust Low Dimensional Representations
Many machine learning systems are vulnerable to small perturbations made to inputs either at test time or at training time. This has received much recent interest on the empirical front due to applications where reliability and security are critical. However, theoretical understanding of algorithms that are robust to adversarial perturbations is limited. In this work we focus on Principal Component Analysis (PCA), a ubiquitous algorithmic primitive in machine learning. We formulate a natural robust variant of PCA where the goal is to find a low dimensional subspace to represent the given data with minimum projection error, that is in addition robust to small perturbations measured in $\ell_q$ norm (say $q=\infty$). Unlike PCA which is solvable in polynomial time, our formulation is computationally intractable to optimize as it captures a variant of the well-studied sparse PCA objective as a special case. We show the following results: -Polynomial time algorithm that is constant factor competitive in the worst-case with respect to the best subspace, in terms of the projection error and the robustness criterion. -We show that our algorithmic techniques can also be made robust to adversarial training-time perturbations, in addition to yielding representations that are robust to adversarial perturbations at test time. Specifically, we design algorithms for a strong notion of training-time perturbations, where every point is adversarially perturbed up to a specified amount. -We illustrate the broad applicability of our algorithmic techniques in addressing robustness to adversarial perturbations, both at training time and test time. In particular, our adversarially robust PCA primitive leads to computationally efficient and robust algorithms for both unsupervised and supervised learning problems such as clustering and learning adversarially robust classifiers.
categories: cs.LG, Other
__index_level_0__: 155,633
2305.10311
Investigating image-based fallow weed detection performance on Raphanus sativus and Avena sativa at speeds up to 30 km h$^{-1}$
Site-specific weed control (SSWC) can provide considerable reductions in weed control costs and herbicide usage. Despite the promise of machine vision for SSWC systems and the importance of ground speed in weed control efficacy, there has been little investigation of the role of ground speed and camera characteristics on weed detection performance. Here, we compare the performance of four camera-software combinations using the open-source OpenWeedLocator platform - (1) default settings on a Raspberry Pi HQ camera, (2) optimised software settings on a HQ camera, (3) optimised software settings on the Raspberry Pi v2 camera, and (4) a global shutter Arducam AR0234 camera - at speeds ranging from 5 km h$^{-1}$ to 30 km h$^{-1}$. A combined excess green (ExG) and hue, saturation, value (HSV) thresholding algorithm was used for testing under fallow conditions using tillage radish (Raphanus sativus) and forage oats (Avena sativa) as representative broadleaf and grass weeds, respectively. ARD demonstrated the highest recall among camera systems, with up to 95.7% of weeds detected at 5 km h$^{-1}$ and 85.7% at 30 km h$^{-1}$. HQ1 and V2 cameras had the lowest recall of 31.1% and 26.0% at 30 km h$^{-1}$, respectively. All cameras experienced a decrease in recall as speed increased. The highest rate of decrease was observed for HQ1 with 1.12% and 0.90% reductions in recall for every km h$^{-1}$ increase in speed for tillage radish and forage oats, respectively. Detection of the grassy forage oats was worse (P<0.05) than the broadleaved tillage radish for all cameras. Despite the variations in recall, HQ1, HQ2, and V2 maintained near-perfect precision at all tested speeds. The variable effect of ground speed and camera system on detection performance of grass and broadleaf weeds, indicates that careful hardware and software considerations must be made when developing SSWC systems.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
364,998
2502.11672
Exact Upper and Lower Bounds for the Output Distribution of Neural Networks with Random Inputs
We derive exact upper and lower bounds for the cumulative distribution function (cdf) of the output of a neural network over its entire support subject to noisy (stochastic) inputs. The upper and lower bounds converge to the true cdf over its domain as the resolution increases. Our method applies to any feedforward NN using continuous monotonic piecewise differentiable activation functions (e.g., ReLU, tanh and softmax) and convolutional NNs, which were beyond the scope of competing approaches. The novelty and an instrumental tool of our approach is to bound general NNs with ReLU NNs. The ReLU NN based bounds are then used to derive upper and lower bounds of the cdf of the NN output. Experiments demonstrate that our method delivers guaranteed bounds of the predictive output distribution over its support, thus providing exact error guarantees, in contrast to competing approaches.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
534,491
2403.10731
Giving a Hand to Diffusion Models: a Two-Stage Approach to Improving Conditional Human Image Generation
Recent years have seen significant progress in human image generation, particularly with the advancements in diffusion models. However, existing diffusion methods encounter challenges when producing consistent hand anatomy and the generated images often lack precise control over the hand pose. To address this limitation, we introduce a novel approach to pose-conditioned human image generation, dividing the process into two stages: hand generation and subsequent body outpainting around the hands. We propose training the hand generator in a multi-task setting to produce both hand images and their corresponding segmentation masks, and employ the trained model in the first stage of generation. An adapted ControlNet model is then used in the second stage to outpaint the body around the generated hands, producing the final result. A novel blending technique that combines the results of both stages in a coherent way is introduced to preserve the hand details during the second stage. This involves sequential expansion of the outpainted region while fusing the latent representations, to ensure a seamless and cohesive synthesis of the final image. Experimental evaluations demonstrate the superiority of our proposed method over state-of-the-art techniques, in both pose accuracy and image quality, as validated on the HaGRID dataset. Our approach not only enhances the quality of the generated hands but also offers improved control over hand pose, advancing the capabilities of pose-conditioned human image generation. The source code of the proposed approach is available at https://github.com/apelykh/hand-to-diffusion.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
438,322
2303.00246
ISBNet: a 3D Point Cloud Instance Segmentation Network with Instance-aware Sampling and Box-aware Dynamic Convolution
Existing 3D instance segmentation methods are dominated by the bottom-up design -- a manually fine-tuned algorithm that groups points into clusters, followed by a refinement network. However, because they rely on the quality of the clusters, these methods generate unreliable results when (1) nearby objects with the same semantic class are packed together, or (2) large objects have loosely connected regions. To address these limitations, we introduce ISBNet, a novel cluster-free method that represents instances as kernels and decodes instance masks via dynamic convolution. To efficiently generate high-recall and discriminative kernels, we propose a simple strategy named Instance-aware Farthest Point Sampling to sample candidates and leverage the local aggregation layer inspired by PointNet++ to encode candidate features. Moreover, we show that predicting and leveraging the 3D axis-aligned bounding boxes in the dynamic convolution further boosts performance. Our method sets new state-of-the-art results on ScanNetV2 (55.9), S3DIS (60.8), and STPLS3D (49.2) in terms of AP and retains fast inference time (237ms per scene on ScanNetV2). The source code and trained models are available at https://github.com/VinAIResearch/ISBNet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
348,533
2205.10018
NMA: Neural Multi-slot Auctions with Externalities for Online Advertising
Online advertising driven by auctions brings billions of dollars in revenue for social networking services and e-commerce platforms. GSP auctions, which are simple and easy to understand for advertisers, have almost become the benchmark for ad auction mechanisms in the industry. However, most GSP-based industrial practices assume that the user click only relies on the ad itself, which overlook the effect of external items, referred to as externalities. Recently, DNA has attempted to upgrade GSP with deep neural networks and models local externalities to some extent. However, it only considers set-level contexts from auctions and ignores the order and displayed position of ads, which is still suboptimal. Although VCG-based multi-slot auctions (e.g., VCG, WVCG) make it theoretically possible to model global externalities (e.g., the order and positions of ads and so on), they lack an efficient balance of both revenue and social welfare. In this paper, we propose novel auction mechanisms named Neural Multi-slot Auctions (NMA) to tackle the above-mentioned challenges. Specifically, we model the global externalities effectively with a context-aware list-wise prediction module to achieve better performance. We design a list-wise deep rank module to guarantee incentive compatibility in end-to-end learning. Furthermore, we propose an auxiliary loss for social welfare to effectively reduce the decline of social welfare while maximizing revenue. Experiment results on both offline large-scale datasets and online A/B tests demonstrate that NMA obtains higher revenue with balanced social welfare than other existing auction mechanisms (i.e., GSP, DNA, WVCG) in industrial practice, and we have successfully deployed NMA on Meituan food delivery platform.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
297,517
1410.7091
On Some Distributed Disorder Detection
Multivariate data sources with components of different information value seem to appear frequently in practice. Models in which the components change their homogeneity at different times are of significant importance. Whether any changes are influential for the whole process is determined not only by the moments of the change, but also by the coordinates at which they occur. This is particularly important in issues such as reliability analysis of complex systems and the location of an intruder in surveillance systems. In this paper we develop a mathematical model for such sources of signals with discrete time having the Markov property given the times of change. The research also comprises a multivariate detection of the transition probability changes at a certain sensitivity level in the multidimensional process. Additionally, the observation of the random vector is depicted. Each chosen coordinate forms a Markov process with different transition probabilities before and after some unknown moment. The aim of statisticians is to estimate the moments based on the observation of the process. The Bayesian approach is used with the risk function depending on a measure of the chance of a false alarm and some cost of overestimation. The moment of the system's disorder is determined by the detection of transition probability changes at some coordinates. The overall modeling of the critical coordinates is based on the simple game.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
37,038
2501.03360
Quantum Feature-Empowered Deep Classification for Fast Mangrove Mapping
A mangrove mapping (MM) algorithm is an essential classification tool for environmental monitoring. The recent literature shows that compared with other index-based MM methods that treat pixels as spatially independent, convolutional neural networks (CNNs) are crucial for leveraging spatial continuity information, leading to improved classification performance. In this work, we go a step further to show that quantum features provide radically new information for CNN to further upgrade the classification results. Simply speaking, CNN computes affine-mapping features, while quantum neural network (QNN) offers unitary-computing features, thereby offering a fresh perspective in the final decision-making (classification). To address the challenging MM problem, we design an entangled spatial-spectral quantum feature extraction module. Notably, to ensure that the quantum features contribute genuinely novel information (unaffected by traditional CNN features), we design a separate network track consisting solely of quantum neurons with built-in interpretability. The extracted pure quantum information is then fused with traditional feature information to jointly make the final decision. The proposed quantum-empowered deep network (QEDNet) is very lightweight, so the improvement does come from the cooperation between CNN and QNN (rather than parameter augmentation). Extensive experiments are conducted to demonstrate the superiority of QEDNet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
522,847
1711.10283
Data Backup Network Formation with Heterogeneous Agents
Social storage systems are becoming increasingly popular compared to existing data backup systems like local, centralized and P2P systems. An endogenously built symmetric social storage model and its aspects like the utility of each agent, bilateral stability, contentment, and efficiency have been extensively discussed in Mane et al. (2017). We include heterogeneity in this model by using the concept of the Social Range Matrix from Kuznetsov et al. (2010). Now, each agent is concerned about its perceived utility, which is a linear combination of its own utility as well as others' utilities (depending upon whether the pair are friends, enemies or do not care about each other). We derive conditions under which two agents may want to add or delete a link, and provide an algorithm that checks whether a bilaterally stable network is possible. Finally, we take some special Social Range Matrices and prove that under certain conditions on network parameters, a bilaterally stable network is unique.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
85,560
2307.14502
The Effect of Spoken Language on Speech Enhancement using Self-Supervised Speech Representation Loss Functions
Recent work in the field of speech enhancement (SE) has involved the use of self-supervised speech representations (SSSRs) as feature transformations in loss functions. However, in prior work, very little attention has been paid to the relationship between the language of the audio used to train the self-supervised representation and that used to train the SE system. Enhancement models trained using a loss function which incorporates a self-supervised representation that shares exactly the language of the noisy data used to train the SE system show better performance than those which do not match exactly. This may lead to enhancement systems which are language specific and as such do not generalise well to unseen languages, unlike models trained using traditional spectrogram or time domain loss functions. In this work, SE models are trained and tested on a number of different languages, with self-supervised representations which themselves are trained using different language combinations and with differing network structures as loss function representations. These models are then tested across unseen languages and their performances are analysed. It is found that the training language of the self-supervised representation appears to have a minor effect on enhancement performance, the amount of training data of a particular language, however, greatly affects performance.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
381,949
2502.08640
Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs
As AIs rapidly advance and become more agentic, the risk they pose is governed not only by their capabilities but increasingly by their propensities, including goals and values. Tracking the emergence of goals and values has proven a longstanding problem, and despite much interest over the years it remains unclear whether current AIs have meaningful values. We propose a solution to this problem, leveraging the framework of utility functions to study the internal coherence of AI preferences. Surprisingly, we find that independently-sampled preferences in current LLMs exhibit high degrees of structural coherence, and moreover that this emerges with scale. These findings suggest that value systems emerge in LLMs in a meaningful sense, a finding with broad implications. To study these emergent value systems, we propose utility engineering as a research agenda, comprising both the analysis and control of AI utilities. We uncover problematic and often shocking values in LLM assistants despite existing control measures. These include cases where AIs value themselves over humans and are anti-aligned with specific individuals. To constrain these emergent value systems, we propose methods of utility control. As a case study, we show how aligning utilities with a citizen assembly reduces political biases and generalizes to new scenarios. Whether we like it or not, value systems have already emerged in AIs, and much work remains to fully understand and control these emergent representations.
false
false
false
false
true
false
true
false
true
false
false
true
false
true
false
false
false
false
533,094
1903.05614
Computing Approximate Equilibria in Sequential Adversarial Games by Exploitability Descent
In this paper, we present exploitability descent, a new algorithm to compute approximate equilibria in two-player zero-sum extensive-form games with imperfect information, by direct policy optimization against worst-case opponents. We prove that when following this optimization, the exploitability of a player's strategy converges asymptotically to zero, and hence when both players employ this optimization, the joint policies converge to a Nash equilibrium. Unlike fictitious play (XFP) and counterfactual regret minimization (CFR), our convergence result pertains to the policies being optimized rather than the average policies. Our experiments demonstrate convergence rates comparable to XFP and CFR in four benchmark games in the tabular case. Using function approximation, we find that our algorithm outperforms the tabular version in two of the games, which, to the best of our knowledge, is the first such result in imperfect information games among this class of algorithms.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
124,197
2106.07338
Neighborhood Rough Set based Multi-document Summarization
This research paper proposes a novel Neighbourhood Rough Set based approach for supervised Multi-document Text Summarization (MDTS), with analysis of its impact on summarization results. Here, the Rough Set based LERS algorithm is improved using Neighborhood Rough Sets, a novel combination called Neighborhood-LERS, which we evaluate for efficacy and efficiency. In this paper, we apply and evaluate the proposed Neighborhood-LERS for multi-document summarization, and show experimentally that it is superior to the base LERS technique for MDTS.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
240,879
2205.06266
Lifting the Curse of Multilinguality by Pre-training Modular Transformers
Multilingual pre-trained models are known to suffer from the curse of multilinguality, which causes per-language performance to drop as they cover more languages. We address this issue by introducing language-specific modules, which allows us to grow the total capacity of the model, while keeping the total number of trainable parameters per language constant. In contrast with prior work that learns language-specific components post-hoc, we pre-train the modules of our Cross-lingual Modular (X-Mod) models from the start. Our experiments on natural language inference, named entity recognition and question answering show that our approach not only mitigates the negative interference between languages, but also enables positive transfer, resulting in improved monolingual and cross-lingual performance. Furthermore, our approach enables adding languages post-hoc with no measurable drop in performance, no longer limiting the model usage to the set of pre-trained languages.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
296,192
2209.07670
Reducing Variance in Temporal-Difference Value Estimation via Ensemble of Deep Networks
In temporal-difference reinforcement learning algorithms, variance in value estimation can cause instability and overestimation of the maximal target value. Many algorithms have been proposed to reduce overestimation, including several recent ensemble methods, however none have shown success in sample-efficient learning through addressing estimation variance as the root cause of overestimation. In this paper, we propose MeanQ, a simple ensemble method that estimates target values as ensemble means. Despite its simplicity, MeanQ shows remarkable sample efficiency in experiments on the Atari Learning Environment benchmark. Importantly, we find that an ensemble of size 5 sufficiently reduces estimation variance to obviate the lagging target network, eliminating it as a source of bias and further gaining sample efficiency. We justify intuitively and empirically the design choices in MeanQ, including the necessity of independent experience sampling. On a set of 26 benchmark Atari environments, MeanQ outperforms all tested baselines, including the best available baseline, SUNRISE, at 100K interaction steps in 16/26 environments, and by 68% on average. MeanQ also outperforms Rainbow DQN at 500K steps in 21/26 environments, and by 49% on average, and achieves average human-level performance using 200K ($\pm$100K) interaction steps. Our implementation is available at https://github.com/indylab/MeanQ.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
317,839
2302.09424
Zero and Few-Shot Localization of Task-Oriented Dialogue Agents with a Distilled Representation
Task-oriented Dialogue (ToD) agents are mostly limited to a few widely-spoken languages, mainly due to the high cost of acquiring training data for each language. Existing low-cost approaches that rely on cross-lingual embeddings or naive machine translation sacrifice a lot of accuracy for data efficiency, and largely fail in creating a usable dialogue agent. We propose automatic methods that use ToD training data in a source language to build a high-quality functioning dialogue agent in another target language that has no training data (i.e. zero-shot) or a small training set (i.e. few-shot). Unlike most prior work in cross-lingual ToD that only focuses on Dialogue State Tracking (DST), we build an end-to-end agent. We show that our approach closes the accuracy gap between few-shot and existing full-shot methods for ToD agents. We achieve this by (1) improving the dialogue data representation, (2) improving entity-aware machine translation, and (3) automatic filtering of noisy translations. We evaluate our approach on the recent bilingual dialogue dataset BiToD. In Chinese to English transfer, in the zero-shot setting, our method achieves 46.7% and 22.0% in Task Success Rate (TSR) and Dialogue Success Rate (DSR) respectively. In the few-shot setting where 10% of the data in the target language is used, we improve the state-of-the-art by 15.2% and 14.0%, coming within 5% of full-shot training.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
346,419
2007.06605
Privacy Amplification via Random Check-Ins
Differentially Private Stochastic Gradient Descent (DP-SGD) forms a fundamental building block in many applications for learning over sensitive data. Two standard approaches, privacy amplification by subsampling, and privacy amplification by shuffling, permit adding lower noise in DP-SGD than via na\"{\i}ve schemes. A key assumption in both these approaches is that the elements in the data set can be uniformly sampled, or be uniformly permuted -- constraints that may become prohibitive when the data is processed in a decentralized or distributed fashion. In this paper, we focus on conducting iterative methods like DP-SGD in the setting of federated learning (FL) wherein the data is distributed among many devices (clients). Our main contribution is the \emph{random check-in} distributed protocol, which crucially relies only on randomized participation decisions made locally and independently by each client. It has privacy/accuracy trade-offs similar to privacy amplification by subsampling/shuffling. However, our method does not require server-initiated communication, or even knowledge of the population size. To our knowledge, this is the first privacy amplification tailored for a distributed learning framework, and it may have broader applicability beyond FL. Along the way, we extend privacy amplification by shuffling to incorporate $(\epsilon,\delta)$-DP local randomizers, and exponentially improve its guarantees. In practical regimes, this improvement allows for similar privacy and utility using data from an order of magnitude fewer users.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
187,055
2106.00728
Evaluating Recipes Generated from Functional Object-Oriented Network
The functional object-oriented network (FOON) has been introduced as a knowledge representation, which takes the form of a graph, for symbolic task planning. To get a sequential plan for a manipulation task, a robot can obtain a task tree through a knowledge retrieval process from the FOON. To evaluate the quality of an acquired task tree, we compare it with a conventional form of task knowledge, such as recipes or manuals. We first automatically convert task trees to recipes, and we then compare them with the human-created recipes in the Recipe1M+ dataset via a survey. Our preliminary study finds no significant difference between the recipes in Recipe1M+ and the recipes generated from FOON task trees in terms of correctness, completeness, and clarity.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
238,224
2110.06354
Tell Me How to Survey: Literature Review Made Simple with Automatic Reading Path Generation
Recent years have witnessed the dramatic growth of paper volumes with plenty of new research papers published every day, especially in the area of computer science. How to glean papers worth reading from the massive literature to do a quick survey or keep up with the latest advancement about a specific research topic has become a challenging task. Existing academic search engines such as Google Scholar return relevant papers by individually calculating the relevance between each paper and query. However, such systems usually omit the prerequisite chains of a research topic and cannot form a meaningful reading path. In this paper, we introduce a new task named Reading Path Generation (RPG) which aims at automatically producing a path of papers to read for a given query. To serve as a research benchmark, we further propose SurveyBank, a dataset consisting of large quantities of survey papers in the field of computer science as well as their citation relationships. Each survey paper contains key phrases extracted from its title and multi-level reading lists inferred from its references. Furthermore, we propose a graph-optimization-based approach for reading path generation which takes the relationship between papers into account. Extensive evaluations demonstrate that our approach outperforms other baselines. A Real-time Reading Path Generation System (RePaGer) has also been implemented with our designed model. To the best of our knowledge, we are the first to target this important research problem. Our source code of the RePaGer system and the SurveyBank dataset can be found here.
false
false
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
260,576
2012.01653
Deep Spectral CNN for Laser Induced Breakdown Spectroscopy
This work proposes a spectral convolutional neural network (CNN) operating on laser induced breakdown spectroscopy (LIBS) signals to learn to (1) disentangle spectral signals from the sources of sensor uncertainty (i.e., pre-process) and (2) get qualitative and quantitative measures of chemical content of a sample given a spectral signal (i.e., calibrate). Once the spectral CNN is trained, it can accomplish either task through a single feed-forward pass, with real-time benefits and without any additional side information requirements including dark current, system response, temperature and detector-to-target range. Our experiments demonstrate that the proposed method outperforms the existing approaches used by the Mars Science Lab for pre-processing and calibration for remote sensing observations from the Mars rover, 'Curiosity'.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
209,480
2406.08099
Confidence Interval Estimation of Predictive Performance in the Context of AutoML
Any supervised machine learning analysis is required to provide an estimate of the out-of-sample predictive performance. However, it is imperative to also provide a quantification of the uncertainty of this performance in the form of a confidence or credible interval (CI) and not just a point estimate. In an AutoML setting, estimating the CI is challenging due to the ``winner's curse", i.e., the bias of estimation due to cross-validating several machine learning pipelines and selecting the winning one. In this work, we perform a comparative evaluation of 9 state-of-the-art methods and variants in CI estimation in an AutoML setting on a corpus of real and simulated datasets. The methods are compared in terms of inclusion percentage (does a 95\% CI include the true performance at least 95\% of the time), CI tightness (tighter CIs are preferable as being more informative), and execution time. The evaluation is the first one that covers most, if not all, such methods and extends previous work to imbalanced and small-sample tasks. In addition, we present a variant, called BBC-F, of an existing method (the Bootstrap Bias Correction, or BBC) that maintains the statistical properties of the BBC but is more computationally efficient. The results support that BBC-F and BBC dominate the other methods in all metrics measured.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
463,352
2202.03987
Data Consistency for Weakly Supervised Learning
In many applications, training machine learning models involves using large amounts of human-annotated data. Obtaining precise labels for the data is expensive. Instead, training with weak supervision provides a low-cost alternative. We propose a novel weak supervision algorithm that processes noisy labels, i.e., weak signals, while also considering features of the training data to produce accurate labels for training. Our method searches over classifiers of the data representation to find plausible labelings. We call this paradigm data consistent weak supervision. A key facet of our framework is that we are able to estimate labels for data examples with low or no coverage from the weak supervision. In addition, we make no assumptions about the joint distribution of the weak signals and true labels of the data. Instead, we use weak signals and the data features to solve a constrained optimization that enforces data consistency among the labels we generate. Empirical evaluation of our method on different datasets shows that it significantly outperforms state-of-the-art weak supervision methods on both text and image classification tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
279,419
2404.13599
"A good pun is its own reword": Can Large Language Models Understand Puns?
Puns play a vital role in academic research due to their distinct structure and clear definition, which aid in the comprehensive analysis of linguistic humor. However, the understanding of puns in large language models (LLMs) has not been thoroughly examined, limiting their use in creative writing and humor creation. In this paper, we leverage three popular tasks, i.e., pun recognition, explanation and generation to systematically evaluate the capabilities of LLMs in pun understanding. In addition to adopting the automated evaluation metrics from prior research, we introduce new evaluation methods and metrics that are better suited to the in-context learning paradigm of LLMs. These new metrics offer a more rigorous assessment of an LLM's ability to understand puns and align more closely with human cognition than previous metrics. Our findings reveal the "lazy pun generation" pattern and identify the primary challenges LLMs encounter in understanding puns.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
448,360
2408.13357
SEQ+MD: Learning Multi-Task as a SEQuence with Multi-Distribution Data
In e-commerce, the order in which search results are displayed when a customer tries to find relevant listings can significantly impact their shopping experience and search efficiency. Tailored re-ranking systems based on relevance and engagement signals in e-commerce have often shown improvements in sales and gross merchandise value (GMV). Designing algorithms for this purpose is even more challenging when the shops are not restricted to domestic buyers, but can sell globally to international buyers. Our solution needs to incorporate shopping preference and cultural traditions in different buyer markets. We propose the SEQ+MD framework, which integrates sequential learning for multi-task learning (MTL) and feature-generated region-mask for multi-distribution input. This approach leverages the sequential order within tasks and accounts for regional heterogeneity, enhancing performance on multi-source data. Evaluations on in-house data showed a strong increase in high-value engagement including add-to-cart and purchase while keeping click performance neutral compared to state-of-the-art baseline models. Additionally, our multi-regional learning module is "plug-and-play" and can be easily adapted to enhance other MTL applications.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
483,112
1911.09923
SWift -- A SignWriting editor to bridge between deaf world and e-learning
SWift (SignWriting improved fast transcriber) is an advanced editor for SignWriting (SW). At present, SW is a promising alternative to provide documents in an easy-to-grasp written form of (any) Sign Language, the gestural way of communication which is widely adopted by the deaf community. SWift was developed for SW users, either deaf or not, to support collaboration and exchange of ideas. The application allows composing and saving desired signs using elementary components, called glyphs. The procedure that was devised guides and simplifies the editing process. SWift aims at breaking the "electronic" barriers that keep the deaf community away from ICT in general, and from e-learning in particular. The editor can be contained in a pluggable module; therefore, it can be integrated everywhere the use of SW is an advisable alternative to written "verbal" language, which often hinders information grasping by deaf users.
true
false
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
154,672
1905.00768
Threshold-based Selective Cooperative-NOMA
In this letter, we propose threshold-based selective cooperative-NOMA (TBS-C-NOMA) to increase the data reliability of conventional cooperative-NOMA (C-NOMA) networks. In TBS-C-NOMA, the intra-cell user forwards the symbols of the cell-edge user after successive interference cancellation (SIC) only if the signal-to-interference plus noise ratio (SINR) is greater than a pre-determined threshold value. Hence, the data reliability of the cell-edge user is increased by eliminating the effect of error propagation. We derive the closed-form end-to-end exact bit error probability (BEP) of the proposed system for various modulation constellations. Then, the optimum threshold value is analyzed in order to minimize the BEP. The obtained expressions are validated via simulations, and it is revealed that TBS-C-NOMA outperforms C-NOMA and full diversity order is achieved.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
129,557
2102.04051
HumanACGAN: conditional generative adversarial network with human-based auxiliary classifier and its evaluation in phoneme perception
We propose a conditional generative adversarial network (GAN) incorporating humans' perceptual evaluations. A deep neural network (DNN)-based generator of a GAN can represent a real-data distribution accurately but can never represent a human-acceptable distribution, which is the range of data whose naturalness humans accept regardless of whether the data are real or not. A HumanGAN was proposed to model the human-acceptable distribution. A DNN-based generator is trained using a human-based discriminator, i.e., humans' perceptual evaluations, instead of the GAN's DNN-based discriminator. However, the HumanGAN cannot represent conditional distributions. This paper proposes the HumanACGAN, a theoretical extension of the HumanGAN, to deal with conditional human-acceptable distributions. Our HumanACGAN trains a DNN-based conditional generator by regarding humans as not only a discriminator but also an auxiliary classifier. The generator is trained by deceiving the human-based discriminator that scores the unconditioned naturalness and the human-based classifier that scores the class-conditioned perceptual acceptability. The training can be executed using the backpropagation algorithm involving humans' perceptual evaluations. Our experimental results in phoneme perception demonstrate that our HumanACGAN can successfully train this conditional generator.
true
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
218,977
2007.07475
Lifting Constructions of PDAs for Coded Caching with Linear Subpacketization
Coded caching is a technique where multicasting and coding opportunities are utilized to achieve better rate-memory tradeoff in cached networks. A crucial parameter in coded caching is subpacketization, which is the number of parts a file is to be split into for coding purposes. The original Maddah-Ali-Niesen scheme has order-optimal rate at a subpacketization growing exponentially with the number of users. In contrast, placement and delivery schemes in coded caching, designed using placement delivery arrays (PDAs), can have linear subpacketization with a penalty in rate. In this work, we propose several constructions of efficient PDAs through lifting, where a base PDA is expanded by replacing each entry by another PDA. By proposing and using the notion of Blackburn-compatibility of PDAs, we provide multiple lifting constructions with increasing coding gains. We compare the constructed coded caching schemes with other existing schemes for moderately high number of users and show that the proposed constructions are versatile and achieve a good rate-memory tradeoff at low subpacketizations.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
187,345
2405.01527
Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation: interacting with unseen objects in novel scenes without test-time adaptation. While typical approaches rely on a large amount of demonstration data for such generalization, we propose an approach that leverages web videos to predict plausible interaction plans and learns a task-agnostic transformation to obtain robot actions in the real world. Our framework, Track2Act, predicts tracks of how points in an image should move in future time-steps based on a goal, and can be trained with diverse videos on the web including those of humans and robots manipulating everyday objects. We use these 2D track predictions to infer a sequence of rigid transforms of the object to be manipulated, and obtain robot end-effector poses that can be executed in an open-loop manner. We then refine this open-loop plan by predicting residual actions through a closed-loop policy trained with a few embodiment-specific demonstrations. We show that this approach of combining scalably learned track prediction with a residual policy requiring minimal in-domain robot-specific data enables diverse generalizable robot manipulation, and present a wide array of real-world robot manipulation results across unseen tasks, objects, and scenes. https://homangab.github.io/track2act/
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
451,386
1605.03626
Predictability and hierarchy in Drosophila behavior
Even the simplest of animals exhibit behavioral sequences with complex temporal dynamics. Prominent amongst the proposed organizing principles for these dynamics has been the idea of a hierarchy, wherein the movements an animal makes can be understood as a set of nested sub-clusters. Although this type of organization holds potential advantages in terms of motion control and neural circuitry, measurements demonstrating this for an animal's entire behavioral repertoire have been limited in scope and temporal complexity. Here, we use a recently developed unsupervised technique to discover and track the occurrence of all stereotyped behaviors performed by fruit flies moving in a shallow arena. Calculating the optimally predictive representation of the fly's future behaviors, we show that fly behavior exhibits multiple time scales and is organized into a hierarchical structure that is indicative of its underlying behavioral programs and its changing internal states.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
55,763
2205.06440
Exploiting Variational Domain-Invariant User Embedding for Partially Overlapped Cross Domain Recommendation
Cross-Domain Recommendation (CDR) has been popularly studied to utilize different domain knowledge to solve the cold-start problem in recommender systems. Most of the existing CDR models assume that both the source and target domains share the same overlapped user set for knowledge transfer. However, only a small proportion of users are simultaneously active on both the source and target domains in practical CDR tasks. In this paper, we focus on the Partially Overlapped Cross-Domain Recommendation (POCDR) problem, that is, how to leverage the information of both the overlapped and non-overlapped users to improve recommendation performance. Existing approaches cannot fully utilize the useful knowledge behind the non-overlapped users across domains, which limits the model performance when the majority of users turn out to be non-overlapped. To address this issue, we propose an end-to-end dual-autoencoder with Variational Domain-invariant Embedding Alignment (VDEA) model, a cross-domain recommendation framework for the POCDR problem, which utilizes dual variational autoencoders with both local and global embedding alignment for exploiting domain-invariant user embedding. VDEA first adopts variational inference to capture collaborative user preferences, and then utilizes Gromov-Wasserstein distribution co-clustering optimal transport to cluster the users with similar rating interaction behaviors. Our empirical studies on Douban and Amazon datasets demonstrate that VDEA significantly outperforms the state-of-the-art models, especially under the POCDR setting.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
296,237
1809.05467
Discovering Reliable Dependencies from Data: Hardness and Improved Algorithms
The reliable fraction of information is an attractive score for quantifying (functional) dependencies in high-dimensional data. In this paper, we systematically explore the algorithmic implications of using this measure for optimization. We show that the problem is NP-hard, which justifies the usage of worst-case exponential-time as well as heuristic search methods. We then substantially improve the practical performance for both optimization styles by deriving a novel admissible bounding function that has an unbounded potential for additional pruning over the previously proposed one. Finally, we empirically investigate the approximation ratio of the greedy algorithm and show that it produces highly competitive results in a fraction of time needed for complete branch-and-bound style search.
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
true
false
107,794
2205.14655
Network Decoding
We consider the problem of error control in a coded, multicast network, focusing on the scenario where the errors can occur only on a proper subset of the network edges. We model this problem via an adversarial noise, presenting a formal framework and a series of techniques to obtain upper and lower bounds on the network's (1-shot) capacity, improving on the best currently known results. In particular, we show that traditional cut-set bounds are not tight in general in the presence of a restricted adversary, and that the non-tightness of these is caused precisely by the restrictions imposed on the noise (and not, as one may expect, by the alphabet size). We also show that, in sharp contrast with the typical situation within network coding, capacity cannot be achieved in general by combining linear network coding with end-to-end channel coding, not even when the underlying network has a single source and a single terminal. We finally illustrate how network decoding techniques are necessary to achieve capacity in the scenarios we examine, exhibiting capacity-achieving schemes and lower bounds for various classes of networks.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
299,446
2204.04546
Adaptive search area for fast motion estimation
This paper suggests a new method for determining the search area for a motion estimation algorithm based on block matching. The search area is adaptively found in the proposed method for each frame block. This search area is similar to that of the full search (FS) algorithm but smaller for most blocks of a frame. Therefore, the proposed algorithm is analogous to FS in terms of regularity but has much less computational complexity. The temporal and spatial correlations among the motion vectors of blocks are used to find the search area. The matched block is chosen from a rectangular area that the prediction vectors set out. Simulation results indicate that the speed of the proposed algorithm is at least seven times better than the FS algorithm.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
290,697
2502.08255
Principles and Framework for the Operationalisation of Meaningful Human Control over Autonomous Systems
This paper proposes an approach to the operationalisation of Meaningful Human Control (MHC) for autonomous systems by setting out operational principles for MHC and introducing a generic framework for its application. With a plethora of seemingly diverging expansions of MHC in practice, this work aims to bring alignment and convergence to its practical use. The increasing integration of autonomous systems in various domains emphasises a critical need to maintain human control to ensure the safe, accountable, and ethical operation of these systems. MHC offers an ideal concept for the design and evaluation of human control over autonomous systems, while considering human and technology capabilities. Through analysis of existing literature and investigation across various domains and related concepts, principles for the operationalisation of MHC are set out to provide tangible guidelines for researchers and practitioners aiming to implement MHC in their systems. The proposed framework dissects generic components of systems and their subsystems aligned with different agents, stakeholders and processes at different levels of proximity to an autonomous technology. The framework is domain-agnostic, emphasizing the universal applicability of the MHC principles irrespective of the technological context, paving the way for safer and more responsible autonomous systems.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
532,952
2201.10382
On-Device Learning with Cloud-Coordinated Data Augmentation for Extreme Model Personalization in Recommender Systems
Data heterogeneity is an intrinsic property of recommender systems, making models trained over the global data on the cloud, which is the mainstream in industry, non-optimal to each individual user's local data distribution. To deal with data heterogeneity, model personalization with on-device learning is a potential solution. However, on-device training using a user's small size of local samples will incur severe overfitting and undermine the model's generalization ability. In this work, we propose a new device-cloud collaborative learning framework, called CoDA, to break the dilemmas of purely cloud-based learning and on-device learning. The key principle of CoDA is to retrieve similar samples from the cloud's global pool to augment each user's local dataset to train the recommendation model. Specifically, after a coarse-grained sample matching on the cloud, a personalized sample classifier is further trained on each device for a fine-grained sample filtering, which can learn the boundary between the local data distribution and the outside data distribution. We also build an end-to-end pipeline to support the flows of data, model, computation, and control between the cloud and each device. We have deployed CoDA in a recommendation scenario of Mobile Taobao. Online A/B testing results show the remarkable performance improvement of CoDA over both cloud-based learning without model personalization and on-device training without data augmentation. Overhead testing on a real device demonstrates the computation, storage, and communication efficiency of the on-device tasks in CoDA.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
276,974
2005.09530
Differentiable Mapping Networks: Learning Structured Map Representations for Sparse Visual Localization
Mapping and localization, preferably from a small number of observations, are fundamental tasks in robotics. We address these tasks by combining spatial structure (differentiable mapping) and end-to-end learning in a novel neural network architecture: the Differentiable Mapping Network (DMN). The DMN constructs a spatially structured view-embedding map and uses it for subsequent visual localization with a particle filter. Since the DMN architecture is end-to-end differentiable, we can jointly learn the map representation and localization using gradient descent. We apply the DMN to sparse visual localization, where a robot needs to localize in a new environment with respect to a small number of images from known viewpoints. We evaluate the DMN using simulated environments and a challenging real-world Street View dataset. We find that the DMN learns effective map representations for visual localization. The benefit of spatial structure increases with larger environments, more viewpoints for mapping, and when training data is scarce. Project website: http://sites.google.com/view/differentiable-mapping
false
false
false
false
false
false
true
true
false
false
false
true
false
false
false
false
false
false
177,950
2001.07827
Coarse-Grain Cluster Analysis of Tensors with Application to Climate Biome Identification
A tensor provides a concise way to codify the interdependence of complex data. Treating a tensor as a d-way array, each entry records the interaction between the different indices. Clustering provides a way to parse the complexity of the data into more readily understandable information. Clustering methods are heavily dependent on the algorithm of choice, as well as the chosen hyperparameters of the algorithm. However, their sensitivity to data scales is largely unknown. In this work, we apply the discrete wavelet transform to analyze the effects of coarse-graining on clustering tensor data. We are particularly interested in understanding how scale affects clustering of the Earth's climate system. The discrete wavelet transform allows classification of the Earth's climate across a multitude of spatial-temporal scales. The discrete wavelet transform is used to produce an ensemble of classification estimates, as opposed to a single classification. Information-theoretic approaches are used to identify important scale lengths in clustering the L15 Climate Dataset. We also discover a sub-collection of the ensemble that spans the majority of the variance observed, allowing for efficient consensus clustering techniques that can be used to identify climate biomes.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
161,133
2405.19226
ContextBLIP: Doubly Contextual Alignment for Contrastive Image Retrieval from Linguistically Complex Descriptions
Image retrieval from contextual descriptions (IRCD) aims to identify an image within a set of minimally contrastive candidates based on linguistically complex text. Despite the success of VLMs, they still significantly lag behind human performance in IRCD. The main challenges lie in aligning key contextual cues in two modalities, where these subtle cues are concealed in tiny areas of multiple contrastive images and within the complex linguistics of textual descriptions. This motivates us to propose ContextBLIP, a simple yet effective method that relies on a doubly contextual alignment scheme for challenging IRCD. Specifically, 1) our model comprises a multi-scale adapter, a matching loss, and a text-guided masking loss. The adapter learns to capture fine-grained visual cues. The two losses enable iterative supervision for the adapter, gradually highlighting the focal patches of a single image to the key textual cues. We term such a way as intra-contextual alignment. 2) Then, ContextBLIP further employs an inter-context encoder to learn dependencies among candidates, facilitating alignment between the text to multiple images. We term this step as inter-contextual alignment. Consequently, the nuanced cues concealed in each modality can be effectively aligned. Experiments on two benchmarks show the superiority of our method. We observe that ContextBLIP can yield comparable results with GPT-4V, despite involving about 7,500 times fewer parameters.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
458,800
1601.00901
Joint learning of ontology and semantic parser from text
Semantic parsing methods are used for capturing and representing the semantic meaning of text. A meaning representation capturing all the concepts in the text may not always be available or may not be sufficiently complete. Ontologies provide a structured and reasoning-capable way to model the content of a collection of texts. In this work, we present a novel approach to joint learning of an ontology and a semantic parser from text. The method is based on semi-automatic induction of a context-free grammar from semantically annotated text. The grammar parses the text into semantic trees. Both the grammar and the semantic trees are used to learn the ontology on several levels -- classes, instances, taxonomic and non-taxonomic relations. The approach was evaluated on the first sentences of Wikipedia pages describing people.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
50,691
2206.11249
GEMv2: Multilingual NLG Benchmarking in a Single Line of Code
Evaluation in machine learning is usually informed by past choices, for example which datasets or metrics to use. This standardization enables the comparison on equal footing using leaderboards, but the evaluation choices become sub-optimal as better alternatives arise. This problem is especially pertinent in natural language generation, which requires ever-improving suites of datasets, metrics, and human evaluation to make definitive claims. To make following best model evaluation practices easier, we introduce GEMv2. The new version of the Generation, Evaluation, and Metrics Benchmark introduces a modular infrastructure for dataset, model, and metric developers to benefit from each other's work. GEMv2 supports 40 documented datasets in 51 languages. Models for all datasets can be evaluated online, and our interactive data card creation and rendering tools make it easier to add new datasets to the living benchmark.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
304,210
2502.03748
Rethinking the Residual Distribution of Locate-then-Editing Methods in Model Editing
Model editing is a powerful technique for updating the knowledge of Large Language Models (LLMs). Locate-then-edit methods are a popular class of approaches that first identify the critical layers storing knowledge, then compute the residual of the last critical layer based on the edited knowledge, and finally perform multi-layer updates using a least-squares solution by evenly distributing the residual from the first critical layer to the last. Although these methods achieve promising results, they have been shown to degrade the original knowledge of LLMs. We argue that residual distribution leads to this issue. To explore this, we conduct a comprehensive analysis of residual distribution in locate-then-edit methods from both empirical and theoretical perspectives, revealing that residual distribution introduces editing errors, leading to inaccurate edits. To address this issue, we propose the Boundary Layer UpdatE (BLUE) strategy to enhance locate-then-edit methods. Sequential batch editing experiments on three LLMs and two datasets demonstrate that BLUE not only delivers an average performance improvement of 35.59\%, significantly advancing the state of the art in model editing, but also enhances the preservation of LLMs' general capabilities. Our code is available at https://github.com/xpq-tech/BLUE.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
530,841
1402.1936
Integer Set Compression and Statistical Modeling
Compression of integer sets and sequences has been extensively studied for settings where elements follow a uniform probability distribution. In addition, methods exist that exploit clustering of elements in order to achieve higher compression performance. In this work, we address the case where enumeration of elements may be arbitrary or random, but where statistics is kept in order to estimate probabilities of elements. We present a recursive subset-size encoding method that is able to benefit from statistics, explore the effects of permuting the enumeration order based on element probabilities, and discuss general properties and possibilities for this class of compression problem.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
30,727
2101.01889
Shape Control of Elastic Objects Based on Implicit Sensorimotor Models and Data-Driven Geometric Features
This paper proposes a general approach to design automatic controls to manipulate elastic objects into desired shapes. The object's geometric model is defined as the shape feature based on the specific task to globally describe the deformation. Raw visual feedback data is processed using classic regression methods to identify parameters of data-driven geometric models in real-time. Our proposed method is able to analytically compute a pose-shape Jacobian matrix based on implicit functions. This model is then used to derive a shape servoing controller. To validate the proposed method, we report a detailed experimental study with robotic manipulators deforming an elastic rod.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
214,478
cmp-lg/9702014
Building a Generation Knowledge Source using Internet-Accessible Newswire
In this paper, we describe a method for automatic creation of a knowledge source for text generation using information extraction over the Internet. We present a prototype system called PROFILE which uses a client-server architecture to extract noun-phrase descriptions of entities such as people, places, and organizations. The system serves two purposes: as an information extraction tool, it allows users to search for textual descriptions of entities; as a utility to generate functional descriptions (FD), it is used in a functional-unification based generation system. We present an evaluation of the approach and its applications to natural language generation and summarization.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
536,701
2301.07666
DDS: Decoupled Dynamic Scene-Graph Generation Network
Scene-graph generation involves creating a structural representation of the relationships between objects in a scene by predicting subject-object-relation triplets from input data. Existing methods show poor performance in detecting triplets outside of a predefined set, primarily due to their reliance on dependent feature learning. To address this issue, we propose DDS -- a decoupled dynamic scene-graph generation network -- that consists of two independent branches that can disentangle extracted features. The key innovation of the current paper is the decoupling of the features representing the relationships from those of the objects, which enables the detection of novel object-relationship combinations. The DDS model is evaluated on three datasets and outperforms previous methods by a significant margin, especially in detecting previously unseen triplets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
340,983
2002.09964
Quantized Decentralized Stochastic Learning over Directed Graphs
We consider a decentralized stochastic learning problem where data points are distributed among computing nodes communicating over a directed graph. As the model size gets large, decentralized learning faces a major bottleneck, namely the heavy communication load due to each node transmitting large messages (model updates) to its neighbors. To tackle this bottleneck, we propose a quantized decentralized stochastic learning algorithm over directed graphs that is based on the push-sum algorithm in decentralized consensus optimization. More importantly, we prove that our algorithm achieves the same convergence rates as the decentralized stochastic learning algorithm with exact communication for both convex and non-convex losses. Numerical evaluations corroborate our main theoretical results and illustrate significant speed-up compared to the exact-communication methods.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
true
false
false
true
165,242
2404.07543
Content-Adaptive Non-Local Convolution for Remote Sensing Pansharpening
Currently, machine learning-based methods for remote sensing pansharpening have progressed rapidly. However, existing pansharpening methods often do not fully exploit differentiating regional information in non-local spaces, thereby limiting the effectiveness of the methods and resulting in redundant learning parameters. In this paper, we introduce a so-called content-adaptive non-local convolution (CANConv), a novel method tailored for remote sensing image pansharpening. Specifically, CANConv employs adaptive convolution, ensuring spatial adaptability, and incorporates non-local self-similarity through the similarity relationship partition (SRP) and the partition-wise adaptive convolution (PWAC) sub-modules. Furthermore, we also propose a corresponding network architecture, called CANNet, which mainly utilizes the multi-scale self-similarity. Extensive experiments demonstrate the superior performance of CANConv, compared with recent promising fusion methods. Besides, we substantiate the method's effectiveness through visualization, ablation experiments, and comparison with existing methods on multiple test sets. The source code is publicly available at https://github.com/duanyll/CANConv.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
445,876
2401.10934
A New Creative Generation Pipeline for Click-Through Rate with Stable Diffusion Model
In the online advertising scenario, sellers often create multiple creatives to provide comprehensive demonstrations, making it essential to present the most appealing design to maximize the Click-Through Rate (CTR). However, sellers generally struggle to consider users' preferences for creative design, leading to relatively lower aesthetics and quantities compared to Artificial Intelligence (AI)-based approaches. Traditional AI-based approaches still face the same problem of not considering user information, while having limited aesthetic knowledge from designers. In fact, by fusing user information, the generated creatives can be more attractive, because different users may have different preferences. To optimize the results, the generated creatives in traditional methods are then ranked by another module named the creative ranking model. The ranking model can predict the CTR score for each creative considering user features. However, the two above stages are regarded as two different tasks and are optimized separately. In this paper, we propose a new automated Creative Generation pipeline for Click-Through Rate (CG4CTR) with the goal of improving CTR during the creative generation stage. Our contributions have 4 parts: 1) The inpainting mode in stable diffusion is first applied to the creative generation task in the online advertising scene. A self-cyclic generation pipeline is proposed to ensure the convergence of training. 2) A prompt model is designed to generate individualized creatives for different user groups, which can further improve the diversity and quality. 3) A reward model comprehensively considers the multimodal features of image and text to improve the effectiveness of the creative ranking task, and it is also critical in the self-cyclic pipeline. 4) The significant benefits obtained in online and offline experiments verify the significance of our proposed method.
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
false
422,820
2103.08780
dictNN: A Dictionary-Enhanced CNN Approach for Classifying Hate Speech on Twitter
Hate speech on social media is a growing concern, and automated methods have so far been sub-par at reliably detecting it. A major challenge lies in the potentially evasive nature of hate speech due to the ambiguity and fast evolution of natural language. To tackle this, we introduce a vectorisation based on a crowd-sourced and continuously updated dictionary of hate words and propose fusing this approach with standard word embedding in order to improve the classification performance of a CNN model. To train and test our model we use a merge of two established datasets (110,748 tweets in total). By adding the dictionary-enhanced input, we are able to increase the CNN model's predictive power and increase the F1 macro score by seven percentage points.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
224,982
2108.04993
LightMove: A Lightweight Next-POI Recommendation for Taxicab Rooftop Advertising
Mobile digital billboards are an effective way to augment brand-awareness. Among various such mobile billboards, taxicab rooftop devices are emerging in the market as a brand-new medium. Motov is a leading company in South Korea in the taxicab rooftop advertising market. In this work, we present a lightweight yet accurate deep learning-based method to predict taxicabs' next locations to better prepare for targeted advertising based on demographic information of locations. Considering the fact that next POI recommendation datasets are frequently sparse, we design our presented model based on neural ordinary differential equations (NODEs), which are known to be robust to sparse/incorrect input, with several enhancements. Our model, which we call LightMove, achieves higher prediction accuracy, a smaller number of parameters, and/or a smaller training/inference time when evaluated on various datasets, in comparison with state-of-the-art models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
250,172
2011.10033
Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR Segmentation
State-of-the-art methods for large-scale driving-scene LiDAR segmentation often project the point clouds to 2D space and then process them via 2D convolution. Although this projection shows competitiveness on point clouds, it inevitably alters and abandons the 3D topology and geometric relations. A natural remedy is to utilize the 3D voxelization and 3D convolution network. However, we found that in the outdoor point cloud, the improvement obtained in this way is quite limited. An important reason is the property of the outdoor point cloud, namely sparsity and varying density. Motivated by this investigation, we propose a new framework for outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern while maintaining these inherent properties. Moreover, a point-wise refinement module is introduced to alleviate the interference of lossy voxel-based label encoding. We evaluate the proposed model on two large-scale datasets, i.e., SemanticKITTI and nuScenes. Our method achieves 1st place on the leaderboard of SemanticKITTI and outperforms existing methods on nuScenes with a noticeable margin of about 4%. Furthermore, the proposed 3D framework also generalizes well to LiDAR panoptic segmentation and LiDAR 3D detection.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
207,396
2010.05732
On the Complementary Nature of Knowledge Graph Embedding, Fine Grain Entity Types, and Language Modeling
We demonstrate the complementary natures of neural knowledge graph embedding, fine-grain entity type prediction, and neural language modeling. We show that a language model-inspired knowledge graph embedding approach yields both improved knowledge graph embeddings and fine-grain entity type representations. Our work also shows that jointly modeling both structured knowledge tuples and language improves both.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
200,239
2405.10729
Contestable AI needs Computational Argumentation
AI has become pervasive in recent years, but state-of-the-art approaches predominantly neglect the need for AI systems to be contestable. Instead, contestability is advocated by AI guidelines (e.g. by the OECD) and regulation of automated decision-making (e.g. GDPR). In this position paper we explore how contestability can be achieved computationally in and for AI. We argue that contestable AI requires dynamic (human-machine and/or machine-machine) explainability and decision-making processes, whereby machines can (i) interact with humans and/or other machines to progressively explain their outputs and/or their reasoning as well as assess grounds for contestation provided by these humans and/or other machines, and (ii) revise their decision-making processes to redress any issues successfully raised during contestation. Given that much of the current AI landscape is tailored to static AIs, the need to accommodate contestability will require a radical rethinking, that, we argue, computational argumentation is ideally suited to support.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
454,869
2502.07778
Stay-Positive: A Case for Ignoring Real Image Features in Fake Image Detection
Detecting AI generated images is a challenging yet essential task. A primary difficulty arises from the detector's tendency to rely on spurious patterns, such as compression artifacts, which can influence its decisions. These issues often stem from specific patterns that the detector associates with the real data distribution, making it difficult to isolate the actual generative traces. We argue that an image should be classified as fake if and only if it contains artifacts introduced by the generative model. Based on this premise, we propose Stay Positive, an algorithm designed to constrain the detector's focus to generative artifacts while disregarding those associated with real data. Experimental results demonstrate that detectors trained with Stay Positive exhibit reduced susceptibility to spurious correlations, leading to improved generalization and robustness to post-processing. Additionally, unlike detectors that associate artifacts with real images, those that focus purely on fake artifacts are better at detecting inpainted real images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
532,750
2311.06989
Creating a Discipline-specific Commons for Infectious Disease Epidemiology
Objective: To create a commons for infectious disease (ID) epidemiology in which epidemiologists, public health officers, data producers, and software developers can not only share data and software, but receive assistance in improving their interoperability. Materials and Methods: We represented 586 datasets, 54 software, and 24 data formats in OWL 2 and then used logical queries to infer potentially interoperable combinations of software and datasets, as well as statistics about the FAIRness of the collection. We represented the objects in DATS 2.2 and a software metadata schema of our own design. We used these representations as the basis for the Content, Search, FAIR-o-meter, and Workflow pages that constitute the MIDAS Digital Commons. Results: Interoperability was limited by lack of standardization of input and output formats of software. When formats existed, they were human-readable specifications (22/24; 92%); only 3 formats (13%) had machine-readable specifications. Nevertheless, logical search of a triple store based on named data formats was able to identify scores of potentially interoperable combinations of software and datasets. Discussion: We improved the findability and availability of a sample of software and datasets and developed metrics for assessing interoperability. The barriers to interoperability included poor documentation of software input/output formats and little attention to standardization of most types of data in this field. Conclusion: Centralizing and formalizing the representation of digital objects within a commons promotes FAIRness, enables its measurement over time and the identification of potentially interoperable combinations of data and software.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
407,152
2406.07253
Hybrid Reinforcement Learning from Offline Observation Alone
We consider the hybrid reinforcement learning setting where the agent has access to both offline data and online interactive access. While Reinforcement Learning (RL) research typically assumes offline data contains complete action, reward and transition information, datasets with only state information (also known as observation-only datasets) are more general, abundant and practical. This motivates our study of the hybrid RL with observation-only offline dataset framework. While the task of competing with the best policy "covered" by the offline data can be solved if a reset model of the environment is provided (i.e., one that can be reset to any state), we show evidence of hardness when only given the weaker trace model (i.e., one can only reset to the initial states and must produce full traces through the environment), without further assumption of admissibility of the offline data. Under the admissibility assumptions -- that the offline data could actually be produced by the policy class we consider -- we propose the first algorithm in the trace model setting that provably matches the performance of algorithms that leverage a reset model. We also perform proof-of-concept experiments that suggest the effectiveness of our algorithm in practice.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
462,965
1911.12683
Moment Propagation of Discrete-Time Stochastic Polynomial Systems using Truncated Carleman Linearization
We propose a method to compute an approximation of the moments of a discrete-time stochastic polynomial system. We use the Carleman linearization technique to transform this finite-dimensional polynomial system into an infinite-dimensional linear one. After taking expectation and truncating the induced deterministic dynamics, we obtain a finite-dimensional linear deterministic system, which we then use to iteratively compute approximations of the moments of the original polynomial system at different time steps. We provide upper bounds on the approximation error for each moment and show that, for large enough truncation limits, the proposed method precisely computes moments for sufficiently small degrees and numbers of time steps. We use our proposed method for safety analysis to compute bounds on the probability of the system state being outside a given safety region. Finally, we illustrate our results on two concrete examples, a stochastic logistic map and a vehicle dynamics under stochastic disturbance.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
155,469
1510.04609
Layer-Specific Adaptive Learning Rates for Deep Networks
The increasing complexity of deep learning architectures is resulting in training time requiring weeks or even months. This slow training is due in part to vanishing gradients, in which the gradients used by back-propagation are extremely large for weights connecting deep layers (layers near the output layer), and extremely small for shallow layers (near the input layer); this results in slow learning in the shallow layers. Additionally, it has also been shown that in highly non-convex problems, such as deep neural networks, there is a proliferation of high-error low curvature saddle points, which slows down learning dramatically. In this paper, we attempt to overcome the two above problems by proposing an optimization method for training deep neural networks which uses learning rates which are both specific to each layer in the network and adaptive to the curvature of the function, increasing the learning rate at low curvature points. This enables us to speed up learning in the shallow layers of the network and quickly escape high-error low curvature saddle points. We test our method on standard image classification datasets such as MNIST, CIFAR10 and ImageNet, and demonstrate that our method increases accuracy as well as reduces the required training time over standard algorithms.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
true
false
false
47,935
2003.09446
Green DetNet: Computation and Memory efficient DetNet using Smart Compression and Training
This paper introduces an incremental training framework for compressing popular Deep Neural Network (DNN) based unfolded multiple-input-multiple-output (MIMO) detection algorithms like DetNet. The idea of incremental training is explored to select the optimal depth while training. To reduce the computation requirements or the number of FLoating point OPerations (FLOPs) and enforce sparsity in weights, the concept of structured regularization is explored using group LASSO and sparse group LASSO. Our methods lead to an astounding $98.9\%$ reduction in memory requirement and $81.63\%$ reduction in FLOPs when compared with DetNet without compromising on BER performance.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
169,049
2001.05459
A Reference Architecture for Plausible Threat Image Projection (TIP) Within 3D X-ray Computed Tomography Volumes
Threat Image Projection (TIP) is a technique used in X-ray security baggage screening systems that superimposes a threat object signature onto a benign X-ray baggage image in a plausible and realistic manner. It has been shown to be highly effective in evaluating the ongoing performance of human operators, improving their vigilance and performance on threat detection. However, with the increasing use of 3D Computed Tomography (CT) in aviation security for both hold and cabin baggage screening, a significant challenge arises in extending TIP to 3D CT volumes, due to the difficulty of 3D CT volume segmentation and of determining a proper insertion location. In this paper, we present an approach for 3D TIP in CT volumes targeting realistic and plausible threat object insertion within 3D CT baggage images. The proposed approach consists of dual threat (source) and baggage (target) volume segmentation, particle swarm optimisation based insertion determination and metal artefact generation. In addition, we propose a TIP quality score metric to evaluate the quality of generated TIP volumes. Qualitative evaluations on real 3D CT baggage imagery show that our approach is able to generate realistic and plausible TIP volumes that are indiscernible from real CT volumes, and the TIP quality scores are consistent with human evaluations.
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
160,548
1908.08906
$\alpha$ Belief Propagation as Fully Factorized Approximation
Belief propagation (BP) can do exact inference in loop-free graphs, but its performance could be poor in graphs with loops, and the understanding of its solution is limited. This work gives an interpretable belief propagation rule that is actually minimization of a localized $\alpha$-divergence. We term this algorithm as $\alpha$ belief propagation ($\alpha$-BP). The performance of $\alpha$-BP is tested in MAP (maximum a posteriori) inference problems, where $\alpha$-BP can outperform (loopy) BP by a significant margin even in fully-connected graphs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
142,678
2412.20608
Conformable Convolution for Topologically Aware Learning of Complex Anatomical Structures
While conventional computer vision emphasizes pixel-level and feature-based objectives, medical image analysis of intricate biological structures necessitates explicit representation of their complex topological properties. Despite their successes, deep learning models often struggle to accurately capture the connectivity and continuity of fine, sometimes pixel-thin, yet critical structures due to their reliance on implicit learning from data. Such shortcomings can significantly impact the reliability of analysis results and hinder clinical decision-making. To address this challenge, we introduce Conformable Convolution, a novel convolutional layer designed to explicitly enforce topological consistency. Conformable Convolution learns adaptive kernel offsets that preferentially focus on regions of high topological significance within an image. This prioritization is guided by our proposed Topological Posterior Generator (TPG) module, which leverages persistent homology. The TPG module identifies key topological features and guides the convolutional layers by applying persistent homology to feature maps transformed into cubical complexes. Our proposed modules are architecture-agnostic, enabling them to be integrated seamlessly into various architectures. We showcase the effectiveness of our framework in the segmentation task, where preserving the interconnectedness of structures is critical. Experimental results on three diverse datasets demonstrate that our framework effectively preserves the topology in the segmentation downstream task, both quantitatively and qualitatively.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
521,281
2202.10669
On Uncertainty Estimation by Tree-based Surrogate Models in Sequential Model-based Optimization
Sequential model-based optimization sequentially selects a candidate point by constructing a surrogate model with the history of evaluations, to solve a black-box optimization problem. Gaussian process (GP) regression is a popular choice as a surrogate model, because of its capability of calculating prediction uncertainty analytically. On the other hand, an ensemble of randomized trees is another option and has practical merits over GPs due to its scalability and ease of handling continuous/discrete mixed variables. In this paper, we revisit various ensembles of randomized trees to investigate their behavior from the perspective of prediction uncertainty estimation. Then, we propose a new way of constructing an ensemble of randomized trees, referred to as BwO forest, where bagging with oversampling is employed to construct bootstrapped samples that are used to build randomized trees with random splitting. Experimental results demonstrate the validity and good performance of BwO forest over existing tree-based models in various circumstances.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
281,614
1603.07513
Achievable DoF Regions of MIMO Networks with Imperfect CSIT
We focus on a two-receiver Multiple-Input-Multiple-Output (MIMO) Broadcast Channel (BC) and Interference Channel (IC) with an arbitrary number of antennas at each node. We assume an imperfect knowledge of local Channel State Information at the Transmitters (CSIT), whose error decays with the Signal-to-Noise-Ratio. With such configuration, we characterize the achievable Degrees-of-Freedom (DoF) regions in both BC and IC, by proposing a Rate-Splitting approach, which divides each receiver's message into a common part and a private part. Compared to the RS scheme designed for the symmetric MIMO case, this scheme is suitable for the general asymmetric deployment with an arbitrary number of antennas at each node. In BC, the proposed block 1) employs an unequal power allocation to the private messages of the two receivers, 2) exploits the benefit of having an extra spatial dimension at one receiver, and 3) is carried out with a Space-Time transmission. These features enable the scheme to yield a greater DoF region than trivially extending the scheme designed for the symmetric case. In IC, we modify the scheme proposed for the BC case by applying a linear transformation to the channel matrices. Such a linear transformation allows us to identify the signal space where the two transmitted signals interfere with each other and propose a proper power allocation policy. The achievable DoF regions are shown to be optimal for some antenna configurations.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
53,637
2311.17853
On the Adversarial Robustness of Graph Contrastive Learning Methods
Contrastive learning (CL) has emerged as a powerful framework for learning representations of images and text in a self-supervised manner while enhancing model robustness against adversarial attacks. More recently, researchers have extended the principles of contrastive learning to graph-structured data, giving birth to the field of graph contrastive learning (GCL). However, whether GCL methods can deliver the same advantages in adversarial robustness as their counterparts in the image and text domains remains an open question. In this paper, we introduce a comprehensive robustness evaluation protocol tailored to assess the robustness of GCL models. We subject these models to adaptive adversarial attacks targeting the graph structure, specifically in the evasion scenario. We evaluate node and graph classification tasks using diverse real-world datasets and attack strategies. With our work, we aim to offer insights into the robustness of GCL methods and hope to open avenues for potential future research directions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
411,438
2502.08550
LLMs can implicitly learn from mistakes in-context
Learning from mistakes is a fundamental feature of human intelligence. Previous work has shown that Large Language Models (LLMs) can also learn from incorrect answers when provided with a comprehensive rationale detailing why an answer is wrong or how to correct it. In this work, we examine whether LLMs can learn from mistakes in mathematical reasoning tasks when these explanations are not provided. We investigate if LLMs are able to implicitly infer such rationales simply from observing both incorrect and correct answers. Surprisingly, we find that LLMs perform better, on average, when rationales are eliminated from the context and incorrect answers are simply shown alongside correct ones. This approach also substantially outperforms chain-of-thought prompting in our evaluations. We show that these results are consistent across LLMs of different sizes and varying reasoning abilities. Further, we carry out an in-depth analysis, and show that prompting with both wrong and correct answers leads to greater performance and better generalisation than introducing additional, more diverse question-answer pairs into the context. Finally, we show that new rationales generated by models that have only observed incorrect and correct answers are scored equally as highly by humans as those produced with the aid of exemplar rationales. Our results demonstrate that LLMs are indeed capable of in-context implicit learning.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
533,054
2203.03342
High-Resolution Peak Demand Estimation Using Generalized Additive Models and Deep Neural Networks
This paper covers predicting high-resolution electricity peak demand features given lower-resolution data. This is a relevant setup as it answers whether limited higher-resolution monitoring helps to estimate future high-resolution peak loads when the high-resolution data is no longer available. That question is particularly interesting for network operators considering replacing high-resolution monitoring with predictive models due to economic considerations. We propose models to predict half-hourly minima and maxima of high-resolution (every minute) electricity load data while model inputs are of a lower resolution (30 minutes). We combine predictions of generalized additive models (GAM) and deep artificial neural networks (DNN), which are popular in load forecasting. We extensively analyze the prediction models, including the input parameters' importance, focusing on load, weather, and seasonal effects. The proposed method won a data competition organized by Western Power Distribution, a British distribution network operator. In addition, we provide a rigorous evaluation study that goes beyond the competition frame to analyze the models' robustness. The results show that the proposed methods are superior to the competition benchmark concerning the out-of-sample root mean squared error (RMSE). This holds regarding the competition month and the supplementary evaluation study, which covers an additional eleven months. Overall, our proposed model combination reduces the out-of-sample RMSE by 57.4\% compared to the benchmark.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
284,053
2305.15616
Reversible and irreversible bracket-based dynamics for deep graph neural networks
Recent works have shown that physics-inspired architectures allow the training of deep graph neural networks (GNNs) without oversmoothing. The role of these physics is unclear, however, with successful examples of both reversible (e.g., Hamiltonian) and irreversible (e.g., diffusion) phenomena producing comparable results despite diametrically opposed mechanisms, and further complications arising due to empirical departures from mathematical theory. This work presents a series of novel GNN architectures based upon structure-preserving bracket-based dynamical systems, which are provably guaranteed to either conserve energy or generate positive dissipation with increasing depth. It is shown that the theoretically principled framework employed here allows for inherently explainable constructions, which contextualize departures from theory in current architectures and better elucidate the roles of reversibility and irreversibility in network performance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
367,691
1204.0867
Optimal Index Codes for a Class of Multicast Networks with Receiver Side Information
This paper studies a special class of multicast index coding problems where a sender transmits messages to multiple receivers, each with some side information. Here, each receiver knows a unique message a priori, and there is no restriction on how many messages each receiver requests from the sender. For this class of multicast index coding problems, we obtain the optimal index code, which has the shortest codelength for which the sender needs to send in order for all receivers to obtain their (respective) requested messages. This is the first class of index coding problems where the optimal index codes are found. In addition, linear index codes are shown to be optimal for this class of index coding problems.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
15,278
2412.02215
Recovering implicit physics model under real-world constraints
Recovering a physics-driven model, i.e. a governing set of equations of the underlying dynamical systems, from real-world data has been of recent interest. Most existing methods either operate on simulation data with unrealistically high sampling rates or require explicit measurements of all system variables, which is not amenable in real-world deployments. Moreover, they assume the timestamps of external perturbations to the physical system are known a priori, without uncertainty, implicitly discounting any sensor time-synchronization or human reporting errors. In this paper, we propose a novel liquid time constant neural network (LTC-NN) based architecture to recover the underlying model of physical dynamics from real-world data. The automatic differentiation property of LTC-NN nodes overcomes problems associated with low sampling rates, the input-dependent time constant in the forward pass of the hidden layer of LTC-NN nodes creates a massive search space of implicit physical dynamics, the physics model solver based data reconstruction loss guides the search for the correct set of implicit dynamics, and the use of the dropout regularization in the dense layer ensures extraction of the sparsest model. Further, to account for the perturbation timing error, we utilize dense layer nodes to search through input shifts that result in the lowest reconstruction loss. Experiments on four benchmark dynamical systems, three with simulation data and one with real-world data, show that the LTC-NN architecture is more accurate in recovering implicit physics model coefficients than the state-of-the-art sparse model recovery approaches. We also introduce four additional case studies (total eight) on real-life medical examples in simulation and with real-world clinical data to show the effectiveness of our approach in recovering the underlying model in practice.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
513,437
2310.05336
GReAT: A Graph Regularized Adversarial Training Method
This paper presents GReAT (Graph Regularized Adversarial Training), a novel regularization method designed to enhance the robust classification performance of deep learning models. Adversarial examples, characterized by subtle perturbations that can mislead models, pose a significant challenge in machine learning. Although adversarial training is effective in defending against such attacks, it often overlooks the underlying data structure. In response, GReAT integrates graph based regularization into the adversarial training process, leveraging the data's inherent structure to enhance model robustness. By incorporating graph information during training, GReAT defends against adversarial attacks and improves generalization to unseen data. Extensive evaluations on benchmark datasets demonstrate that GReAT outperforms state of the art methods in robustness, achieving notable improvements in classification accuracy. Specifically, compared to the second best methods, GReAT achieves a performance increase of approximately 4.87% for CIFAR10 against FGSM attack and 10.57% for SVHN against FGSM attack. Additionally, for CIFAR10, GReAT demonstrates a performance increase of approximately 11.05% against PGD attack, and for SVHN, a 5.54% increase against PGD attack. This paper provides detailed insights into the proposed methodology, including numerical results and comparisons with existing approaches, highlighting the significant impact of GReAT in advancing the performance of deep learning models.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
398,099
2008.00460
Joint Object Contour Points and Semantics for Instance Segmentation
The attributes of object contours have great significance for the instance segmentation task. However, most of the current popular deep neural networks do not pay much attention to the object edge information. Inspired by the human annotation process when making instance segmentation datasets, in this paper, we propose Mask Point R-CNN, aiming at promoting the neural network's attention to the object boundary. Specifically, we innovatively extend the original human keypoint detection task to the contour point detection of any object. Based on this analogy, we present a contour point detection auxiliary task for Mask R-CNN, which can boost the gradient flow between different tasks by effectively using feature fusion strategies and multi-task joint training. As a consequence, the model will be more sensitive to the edges of the object and can capture more geometric features. Quantitatively, the experimental results show that our approach outperforms vanilla Mask R-CNN by 3.8\% on the Cityscapes dataset and 0.8\% on the COCO dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
190,017
1909.00906
Hyper-Pairing Network for Multi-Phase Pancreatic Ductal Adenocarcinoma Segmentation
Pancreatic ductal adenocarcinoma (PDAC) is one of the most lethal cancers with an overall five-year survival rate of 8%. Due to subtle texture changes of PDAC, pancreatic dual-phase imaging is recommended for better diagnosis of pancreatic disease. In this study, we aim at enhancing PDAC automatic segmentation by integrating multi-phase information (i.e., arterial phase and venous phase). To this end, we present Hyper-Pairing Network (HPN), a 3D fully convolution neural network which effectively integrates information from different phases. The proposed approach consists of a dual path network where the two parallel streams are interconnected with hyper-connections for intensive information exchange. Additionally, a pairing loss is added to encourage the commonality between high-level feature representations of different phases. Compared to prior arts which use single phase data, HPN reports a significant improvement up to 7.73% (from 56.21% to 63.94%) in terms of DSC.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
143,750
2411.19320
Generalized Gaussian Model for Learned Image Compression
In learned image compression, probabilistic models play an essential role in characterizing the distribution of latent variables. The Gaussian model with mean and scale parameters has been widely used for its simplicity and effectiveness. Probabilistic models with more parameters, such as Gaussian mixture models, can fit the distribution of latent variables more precisely, but the corresponding complexity will also be higher. To balance between compression performance and complexity, we extend the Gaussian model to the generalized Gaussian model for more flexible latent distribution modeling, introducing only one additional shape parameter, beta, compared to the Gaussian model. To enhance the performance of the generalized Gaussian model by alleviating the train-test mismatch, we propose improved training methods, including beta-dependent lower bounds for scale parameters and gradient rectification. Our proposed generalized Gaussian model, coupled with the improved training methods, is demonstrated to outperform the Gaussian and Gaussian mixture models on a variety of learned image compression methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
512,199
1811.12811
Millimeter Wave Receiver Comparison Under Energy vs Spectral Efficiency Trade-off
Receivers for mmWave systems suffer from high power consumption in Analog to Digital Converters (ADC), and there is a need to compare the three major receiver architectures: Analog, Hybrid and Digital Combining (AC, HC and DC). Moreover, the specific power consumption figure of merit of ADCs varies significantly between different component designs in the literature, so that comparisons performed for one ADC model - no matter how representative of the state of the art - do not necessarily carry over to other ADC designs with different figures of merit. In this work, we formulate a comparison method between AC, HC and DC that can be easily reproduced with different power consumption parameters and provides all information for receiver architecture selection in a compact chart figure. We also present an interpretation of the receiver selection decision problem as a multi-objective utility optimization to find the best Spectral Efficiency (SE) versus Energy Efficiency (EE) trade-off. We use existing results on the achievable rate of AC, HC and DC systems and an Additive Quantization Noise Model (AQNM) of the ADC capacity degradation. For some example commercial component parameters, we show that the usually held belief that DC requires the highest power is not valid in many cases. Rather, either DC or HC alternatively result in the better SE vs EE trade-off depending strongly on the considered component parameters and on the weight assigned to SE vs EE in the utility maximization.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
115,107
2305.03928
An Overview of AI and Blockchain Integration for Privacy-Preserving
With the widespread attention and application of artificial intelligence (AI) and blockchain technologies, privacy protection techniques arising from their integration are of notable significance. In addition to protecting the privacy of individuals, these techniques also guarantee the security and dependability of data. This paper initially presents an overview of AI and blockchain, summarizing their combination along with derived privacy protection technologies. It then explores specific application scenarios in data encryption, de-identification, multi-tier distributed ledgers, and k-anonymity methods. Moreover, the paper evaluates five critical aspects of AI-blockchain-integration privacy protection systems, including authorization management, access control, data protection, network security, and scalability. Furthermore, it analyzes the deficiencies and their actual causes, offering corresponding suggestions. This research also classifies and summarizes privacy protection techniques based on AI-blockchain application scenarios and technical schemes. In conclusion, this paper outlines the future directions of privacy protection technologies emerging from AI and blockchain integration, including enhancing efficiency and security to achieve more comprehensive privacy protection.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
362,566
1701.04201
Stable Wireless Network Control Under Service Constraints
We consider the design of wireless queueing network control policies with particular focus on combining stability with additional application-dependent requirements. Thereby, we consequently pursue a cost function based approach that provides the flexibility to incorporate constraints and requirements of particular services or applications. As typical examples of such requirements, we consider the reduction of buffer underflows in case of streaming traffic, and energy efficiency in networks of battery powered nodes. Compared to the classical throughput optimal control problem, such requirements significantly complicate the control problem. We provide easily verifiable theoretical conditions for stability, and, additionally, compare various candidate cost functions applied to wireless networks with streaming media traffic. Moreover, we demonstrate how the framework can be applied to the problem of energy efficient routing, and we demonstrate the application of our framework in cross-layer control problems for wireless multihop networks, using an advanced power control scheme for interference mitigation, based on successive convex approximation. In all scenarios, the performance of our control framework is evaluated using extensive numerical simulations.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
66,825
2106.05086
Design and Implementation of 5G eHealth Systems, Technologies, Use Cases and Future Challenges
Fifth generation (5G) aims to connect massive devices with even higher reliability, lower latency and even faster transmission speed, which are vital for implementing e-health systems. However, the current efforts on 5G e-health systems are still not enough to accomplish their full blueprint. In this article, we first discuss the related technologies from physical layer, upper layer and cross-layer perspectives on designing the 5G e-health systems. We afterwards elaborate two use cases according to our implementations, i.e., 5G e-health systems for remote health and 5G e-health systems for COVID-19 pandemic containment. We finally envision the future research trends and challenges of 5G e-health systems.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
true
239,963